0. STUDY CONTEXT AND OBJECTIVES

Title: Psychometric Evaluation of the GIAF: Classical Test Theory and Network Analysis

Authors:
Diego Diaz-Milanes1,2

Affiliation:

1 Department of Quantitative Methods, Universidad Loyola Andalucía, Sevilla, Spain

2 Health Research Institute, University of Canberra, Canberra, ACT, Australia

Reproducibility Statement

This R Markdown file contains the de-identified data and all R code used in the study, ensuring full transparency and reproducibility.

Analysis Workflow

The workflow comprises ten steps:

  1. Data import and preprocessing
    Load the dataset, screen missingness/outliers, and prepare variables.

  2. Descriptive statistics
    Summarize sociodemographics, inspect the distribution of GIAF items, and explore project differences where relevant.

  3. Exploratory Factor Analysis (EFA)
    Estimate the factorial structure on a random subsample using polychoric correlations with ULS extraction and no rotation; determine factors via parallel analysis and related criteria; retain items based on loading/cross-loading/communality rules.

  4. Confirmatory Factor Analysis (CFA)
    Test the EFA-supported model using ULS; report CFI, TLI, RMSEA (90% CI), and SRMR; refine only with theory-consistent modifications.

  5. Measurement invariance (CFA)
    Evaluate configural, metric, scalar (and, if relevant, strict) invariance across groups (e.g., projects); judge invariance via \(\Delta\text{CFI}\), \(\Delta\text{RMSEA}\), and \(\Delta\text{SRMR}\); apply partial invariance if needed.

  6. Reliability
    Compute ordinal \(\alpha\), split-half reliability, and McDonald’s \(\omega\) for the total scale and subscales (polychoric input).

  7. Undirected network modeling
    Estimate a partial correlation (GGM) network of GIAF items using EBICglasso.

  8. Network invariance testing
    Compare network structure across projects.

  9. Directed network modeling
    Learn a Bayesian Network (DAG) for potential directional dependencies among GIAF items with bootstrap aggregation.

  10. Predictive evaluation
    Assess model predictability (e.g., \(R^2\), RMSE) via cross-validation and compare performance across models.
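The invariance decision rule in step 5 can be sketched as a small base-R helper. The function name `delta_fit` and the cutoff values (\(\Delta\text{CFI}\) ≤ .01, \(\Delta\text{RMSEA}\) ≤ .015, \(\Delta\text{SRMR}\) ≤ .030) are illustrative assumptions following common guidelines, not output from this study:

```r
# Hypothetical helper: compare fit indices of a constrained model against a
# baseline and flag whether each change stays within the chosen cutoffs.
delta_fit <- function(baseline, constrained,
                      cutoffs = c(cfi = 0.01, rmsea = 0.015, srmr = 0.030)) {
  idx   <- names(cutoffs)
  delta <- constrained[idx] - baseline[idx]
  data.frame(
    index     = idx,
    delta     = unname(delta),
    cutoff    = unname(cutoffs),
    invariant = unname(abs(delta) <= cutoffs),  # TRUE = within tolerance
    row.names = NULL
  )
}

# Toy example (made-up fit values, not study results)
base <- c(cfi = 0.972, rmsea = 0.048, srmr = 0.041)
cons <- c(cfi = 0.966, rmsea = 0.055, srmr = 0.049)
delta_fit(base, cons)
```

In practice the two vectors would come from `lavaan::fitMeasures()` on the configural and the constrained models.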

1. DATA LOADING & PROCESSING

This section lists the packages, helper functions, and data used. Data processing—variable selection and filtering by unit of analysis—is then performed.

1.1. Import Packages & Functions

# ---- Load libraries (already attached above; keep here if you knit this chunk standalone)
suppressPackageStartupMessages({
  library(readxl); library(tidyr); library(dplyr); library(corrplot); library(psych)
  library(parameters); library(lavaan); library(semPlot); library(ggplot2); library(semTools)
  library(tibble); library(purrr); library(effectsize); library(EGAnet); library(qgraph)
  library(NetworkComparisonTest); library(bootnet); library(mgm); library(bnlearn); library(caret)
  library(forcats); library(knitr); library(kableExtra); library(readr)
})

# ---- Helper: safe version + loaded flag
safe_version <- function(pkg) {
  v <- tryCatch(as.character(utils::packageVersion(pkg)), error = function(e) NA_character_)
  if (is.na(v)) "" else v
}
is_loaded <- function(pkg) pkg %in% loadedNamespaces()

# ---- Packages + key functions actually used/reported
pkgs <- tibble::tibble(
  Package   = c(
    "readxl","tidyr","dplyr","psych","semTools","ggplot2","effectsize",
    "EGAnet","qgraph","NetworkComparisonTest","bootnet","mgm","bnlearn",
    "caret","forcats","knitr","kableExtra","lavaan","semPlot","corrplot","haven","parameters"
  ),
  Functions = c(
    "`read_excel`",
    "`pivot_longer`, `pivot_wider`, reshaping support",
    "`select`, `mutate`, `filter`, `group_by`, `summarise`, `bind_rows`",
    "`describe`, `polychoric`, `alpha`, `omega`, `splitHalf`",
    "`mardiaSkew`, `mardiaKurtosis`, invariance helpers",
    "`ggplot`, `aes`, `geom_line`, `labs`, `theme_bw`",
    "`cohens_d`",
    "`bootEGA`, `dimensionStability`",
    "`qgraph`, `EBICglasso`",
    "`NCT`, `plot`",
    "`estimateNetwork`, `bootnet`, `corStability`",
    "`mgm`, `predict`",
    "`boot.strength`, `averaged.network`, `bn.fit`, `predict`",
    "`confusionMatrix`",
    "`fct_reorder`",
    "`kable`",
    "`kable_styling`, `add_header_above`",
    "`cfa`, `fitMeasures`",
    "`semPaths`",
    "`corrplot`",
    "`read_sav`, `as_factor`",
    "`model_parameters`"
  )
)

# ---- Add Version and Loaded columns
packages_info <- pkgs %>%
  mutate(
    Version = vapply(Package, safe_version, character(1)),
    Loaded  = ifelse(vapply(Package, is_loaded, logical(1)), "Yes", "No")
  ) %>%
  select(Package, Version, Loaded, Functions)

# ---- Display table
knitr::kable(
  packages_info,
  caption = "Table 1. Packages used, versions, loaded status, and key functions"
) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Table 1. Packages used, versions, loaded status, and key functions
Package Version Loaded Functions
readxl 1.4.3 Yes read_excel
tidyr 1.3.1 Yes pivot_longer, pivot_wider, reshaping support
dplyr 1.1.4 Yes select, mutate, filter, group_by, summarise, bind_rows
psych 2.4.1 Yes describe, polychoric, alpha, omega, splitHalf
semTools 0.5.6 Yes mardiaSkew, mardiaKurtosis, invariance helpers
ggplot2 3.5.2 Yes ggplot, aes, geom_line, labs, theme_bw
effectsize 0.8.9 Yes cohens_d
EGAnet 2.0.4 Yes bootEGA, dimensionStability
qgraph 1.9.8 Yes qgraph, EBICglasso
NetworkComparisonTest 2.2.2 Yes NCT, plot
bootnet 1.5.6 Yes estimateNetwork, bootnet, corStability
mgm 1.2.14 Yes mgm, predict
bnlearn 4.9 Yes boot.strength, averaged.network, bn.fit, predict
caret 6.0.94 Yes confusionMatrix
forcats 1.0.0 Yes fct_reorder
knitr 1.45 Yes kable
kableExtra 1.4.0 Yes kable_styling, add_header_above
lavaan 0.6.17 Yes cfa, fitMeasures
semPlot 1.1.6 Yes semPaths
corrplot 0.92 Yes corrplot
haven 2.5.4 No read_sav, as_factor
parameters 0.22.1 Yes model_parameters

We are using 22 packages.

Definition of a helper function for plotting centrality measures of two or more networks side by side.

compareCentrality <- function(net1, net2,
                              include = c("Strength",
                                          "Closeness",
                                          "Betweenness",
                                          "ExpectedInfluence",
                                          "all",
                                          "All"),
                              decreasing = TRUE,
                              legendName = '',
                              net1Name = 'Network 1',
                              net2Name = 'Network 2'){
  
  # Expand the "all"/"All" shorthand to every centrality measure
  if (any(include %in% c("all", "All"))) {
    include <- c("Strength",
                 "Closeness",
                 "Betweenness",
                 "ExpectedInfluence")
  }
  
  df <- centralityTable(net1, net2) %>% filter(measure %in% include)
  
  df %>% 
    mutate(graph = case_when(graph == 'graph 1' ~ net1Name,
                             graph == 'graph 2' ~ net2Name),
           graph = as.factor(graph),
           node = as.factor(node)) %>% 
    # Order nodes by their (median) centrality value
    mutate(node = fct_reorder(node, value, .desc = decreasing)) %>% 
    ggplot(aes(x = node, y = value, group = graph)) +
    geom_line(aes(linetype = graph), linewidth = 1) +
    labs(x = '', y = '') +
    scale_linetype_discrete(name = legendName) +
    coord_flip() +
    facet_grid(~measure) +
    theme_bw()
  
}

compareCentrality3 <- function(net1, net2, net3,
                               include = c("Strength","Closeness","Betweenness","ExpectedInfluence","all","All"),
                               orderBy = c("Strength","Closeness","Betweenness","ExpectedInfluence"),
                               decreasing = TRUE,
                               legendName = "",
                               net1Name = "Network 1",
                               net2Name = "Network 2",
                               net3Name = "Network 3") {

  orderBy <- match.arg(orderBy)

  if (any(include %in% c("All","all"))) {
    include <- c("Strength","Closeness","Betweenness","ExpectedInfluence")
  }

  df <- centralityTable(net1, net2, net3) |>
    dplyr::filter(measure %in% include) |>
    dplyr::mutate(
      graph = dplyr::case_when(
        graph == "graph 1" ~ net1Name,
        graph == "graph 2" ~ net2Name,
        graph == "graph 3" ~ net3Name,
        TRUE ~ graph
      ),
      graph = as.factor(graph),
      node  = as.factor(node)
    )

  # Order nodes by the selected measure (mean across graphs)
  ref_order <- df |>
    dplyr::filter(measure == orderBy) |>
    dplyr::group_by(node) |>
    dplyr::summarise(ref = mean(value, na.rm = TRUE), .groups = "drop")

  df <- df |>
    dplyr::left_join(ref_order, by = "node") |>
    dplyr::mutate(node = forcats::fct_reorder(node, ref, .desc = decreasing))

  ggplot2::ggplot(df, ggplot2::aes(x = node, y = value, group = graph)) +
    ggplot2::geom_line(ggplot2::aes(linetype = graph), linewidth = 1) +
    ggplot2::labs(x = "", y = "") +
    ggplot2::scale_linetype_discrete(name = legendName) +
    ggplot2::coord_flip() +
    ggplot2::facet_grid(~ measure) +
    ggplot2::theme_bw()
}

compareCentrality3_color <- function(net1, net2, net3,
                                     include = c("Strength","Closeness","Betweenness","ExpectedInfluence","all","All"),
                                     orderBy = c("Strength","Closeness","Betweenness","ExpectedInfluence"),
                                     decreasing = TRUE,
                                     legendName = "",
                                     net1Name = "Network 1",
                                     net2Name = "Network 2",
                                     net3Name = "Network 3",
                                     addPoints = TRUE) {

  orderBy <- match.arg(orderBy)

  if (any(include %in% c("All","all"))) {
    include <- c("Strength","Closeness","Betweenness","ExpectedInfluence")
  }

  df <- centralityTable(net1, net2, net3) |>
    dplyr::filter(measure %in% include) |>
    dplyr::mutate(
      graph = dplyr::case_when(
        graph == "graph 1" ~ net1Name,
        graph == "graph 2" ~ net2Name,
        graph == "graph 3" ~ net3Name,
        TRUE ~ graph
      ),
      graph = as.factor(graph),
      node  = as.factor(node)
    )

  # Order nodes by the selected measure (mean across graphs)
  ref_order <- df |>
    dplyr::filter(measure == orderBy) |>
    dplyr::group_by(node) |>
    dplyr::summarise(ref = mean(value, na.rm = TRUE), .groups = "drop")

  df <- df |>
    dplyr::left_join(ref_order, by = "node") |>
    dplyr::mutate(node = forcats::fct_reorder(node, ref, .desc = decreasing))

  p <- ggplot2::ggplot(df, ggplot2::aes(x = node, y = value, group = graph, color = graph)) +
    ggplot2::geom_line(linewidth = 1) +
    ggplot2::labs(x = "", y = "") +
    ggplot2::coord_flip() +
    ggplot2::facet_grid(~ measure) +
    ggplot2::theme_bw() +
    ggplot2::scale_color_discrete(name = legendName)

  if (isTRUE(addPoints)) {
    p <- p + ggplot2::geom_point(size = 2)
  }

  p
}

perc_attenuation <- function(x){
  poly_x <- polychoric(x)
  poly_alpha_x <- psych::alpha(poly_x$rho)
  ordinal_alpha <- poly_alpha_x$total$raw_alpha
  
  raw_x <- psych::alpha(x)  # alpha on the raw (Pearson) correlations
  raw_alpha <- raw_x$total$raw_alpha
  
  perc_atten <- (raw_alpha - ordinal_alpha)/ordinal_alpha*100
  raw_alpha <- paste("Raw alpha:", round(raw_alpha, digits = 4), sep = " ")
  ordinal_alpha <- paste("Ordinal alpha:", round(ordinal_alpha, digits = 4), sep = " ")
  perc_atten <- paste("Percent attenuation:",round(perc_atten, digits = 2),sep = " ")
  total <- paste(raw_alpha, ordinal_alpha, perc_atten, sep = "\n")
  total <- noquote(strsplit(total, "\n")[[1]])
  return(total)
}
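The attenuation formula in perc_attenuation is simply the relative difference between the two alphas. With illustrative values (not study results), a raw α of .85 against an ordinal α of .90 gives:

```r
raw_alpha     <- 0.85  # illustrative value, not a study result
ordinal_alpha <- 0.90  # illustrative value

# Percent attenuation: how much the Pearson-based alpha understates
# (negative sign) the polychoric-based alpha
perc_atten <- (raw_alpha - ordinal_alpha) / ordinal_alpha * 100
round(perc_atten, 2)  # -5.56
```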

1.2. Data Loading

The dataset from the #### project (Ethical Committee: ####) was loaded, including only the minimum set of variables required for the present study.

df_DRA <- read_excel("data_cindy.xlsx", sheet = "DRA")
df_DRA
df_Diskussionsforum <- read_excel("data_cindy.xlsx", sheet = "Diskussionsforum")
df_Diskussionsforum
df_CMHA <- read_excel("data_cindy.xlsx", sheet = "CMHA")
df_CMHA
df_general <- rbind(df_DRA, df_Diskussionsforum, df_CMHA)
df_general
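Since the three sheets are stacked with rbind, a quick guard that they share an identical column layout fails fast if an export ever changes. A minimal sketch with toy frames (in the analysis the check would run on df_DRA, df_Diskussionsforum, and df_CMHA):

```r
# Toy frames standing in for two project sheets
a <- data.frame(project = "DRA",  REL1 = 5, REL2 = 6)
b <- data.frame(project = "CMHA", REL1 = 7, REL2 = 4)

# Stop early if the sheets do not share the same column names/order
stopifnot(identical(names(a), names(b)))
combined <- rbind(a, b)
nrow(combined)  # 2
```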

1.3. Filtering

Filters were applied to exclude cases with missing values and to remove participants who did not complete the instrument (items coded 0):

DRA

initial_rows <- nrow(df_DRA)
df_DRA <- as.data.frame(df_DRA)
df_DRA <- na.omit(df_DRA)
df_DRA <- df_DRA %>%
  filter(if_all(2:11, ~ . != 0))

print(paste("Initial number of rows:", initial_rows, "| Final number of rows:", nrow(df_DRA)))
## [1] "Initial number of rows: 276 | Final number of rows: 241"
DRA_df <- dplyr::select(df_DRA,-c(1))

Diskussionsforum

initial_rows <- nrow(df_Diskussionsforum)
df_Diskussionsforum <- as.data.frame(df_Diskussionsforum)
df_Diskussionsforum <- na.omit(df_Diskussionsforum)
df_Diskussionsforum <- df_Diskussionsforum %>%
  filter(if_all(2:11, ~ . != 0))

print(paste("Initial number of rows:", initial_rows, "| Final number of rows:", nrow(df_Diskussionsforum)))
## [1] "Initial number of rows: 385 | Final number of rows: 253"
Diskussionsforum_df <- dplyr::select(df_Diskussionsforum,-c(1))

CMHA

initial_rows <- nrow(df_CMHA)
df_CMHA <- as.data.frame(df_CMHA)
df_CMHA <- na.omit(df_CMHA)
df_CMHA <- df_CMHA %>%
  filter(if_all(2:11, ~ . != 0))

print(paste("Initial number of rows:", initial_rows, "| Final number of rows:", nrow(df_CMHA)))
## [1] "Initial number of rows: 112 | Final number of rows: 105"
CMHA_df <- dplyr::select(df_CMHA,-c(1))

Total

initial_rows <- nrow(df_general)
df_general <- as.data.frame(df_general)
df_general <- na.omit(df_general)
df_general <- df_general %>%
  filter(if_all(2:11, ~ . != 0))

print(paste("Initial number of rows:", initial_rows, "| Final number of rows:", nrow(df_general)))
## [1] "Initial number of rows: 773 | Final number of rows: 599"
giaf_df <- dplyr::select(df_general,-c(1))
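A quick consistency check: the retained pooled sample should equal the sum of the per-project retained samples reported above.

```r
# Retained n per project, as printed in the outputs above
n_DRA  <- 241
n_Disk <- 253
n_CMHA <- 105
n_total <- 599  # retained rows in the pooled data

stopifnot(n_DRA + n_Disk + n_CMHA == n_total)  # passes: 241 + 253 + 105 = 599
```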

1.4. Variable Selection

Creation of dataframes tailored to the requirements of the subsequent analyses.

1. Dataframe for the analysis of individual items

The following table displays the first five rows of the set of items from the GIAF instrument used in the present study.

# Prepare the dataframe
df_general$project <- as.factor(df_general$project)
pieColor <- rep("#EB9446", length(giaf_df))

# Display first 5 rows with a formatted table
head(giaf_df, 5) %>%
  kable(caption = "Table 2. Set of items from the GIAF") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 2. Set of items from the GIAF
REL1 REL2 REL3 REL4 ACC APP PRAC1 PRAC2 EFF VAL
5 5 5 5 5 5 5 5 4 5
6 5 4 5 7 5 7 6 7 5
7 7 4 6 6 7 6 6 4 4
7 7 6 6 6 6 6 6 5 6
2 6 2 3 7 5 7 7 4 5

2. Dataframe for assessing model invariance across projects

The following table displays the first five rows of the dataset used for testing network invariance across projects, including the GIAF items and participants’ project.

# Select GIAF items and project for invariance analysis
giaf_invariance <- df_general

# Display first 5 rows with a styled table
head(giaf_invariance, 5) %>%
  kable(caption = "Table 4. GIAF items and project variable for network invariance analysis") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 4. GIAF items and project variable for network invariance analysis
project REL1 REL2 REL3 REL4 ACC APP PRAC1 PRAC2 EFF VAL
DRA 5 5 5 5 5 5 5 5 4 5
DRA 6 5 4 5 7 5 7 6 7 5
DRA 7 7 4 6 6 7 6 6 4 4
DRA 7 7 6 6 6 6 6 6 5 6
DRA 2 6 2 3 7 5 7 7 4 5

1.5. Sociodemographic Characteristics

Participant distribution by project:

# Project distribution
project_table <- table(giaf_invariance$project)
project_percent <- round(prop.table(project_table) * 100, 2)

project_df <- data.frame(
  Project = names(project_table),
  Frequency = as.numeric(project_table),
  Percentage = paste0(project_percent, "%")
)

# Display tables
kable(project_df, caption = "Table 5. Project distribution (frequency and %)") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 5. Project distribution (frequency and %)
Project Frequency Percentage
CMHA 105 17.53%
Diskussionsforum 253 42.24%
DRA 241 40.23%

2. DESCRIPTIVE ANALYSIS

2.1. Total Sample

2.1.1. Items in the Total Sample

Univariate descriptive analysis of the items included in the GIAF instrument in the total sample:

# Descriptive statistics
descrgen <- describe(giaf_df)

# Build dataframe with selected stats
lgen <- list(c(1:10), descrgen$mean, descrgen$sd, descrgen$min,
             descrgen$max, descrgen$skew, descrgen$kurtosis)
lgen <- as.data.frame(lgen)

floor_pct  <- sapply(giaf_df, function(x) mean(x == min(x, na.rm = TRUE), na.rm = TRUE) * 100)
ceiling_pct <- sapply(giaf_df, function(x) mean(x == max(x, na.rm = TRUE), na.rm = TRUE) * 100)

lgen <- cbind(lgen,floor_pct)
lgen <- cbind(lgen,ceiling_pct)

# Round and rename columns
lgen <- lgen %>% 
  mutate_if(is.numeric, round, digits = 3)

colnames(lgen) <- c("Item", "Mean", "SD", "Min", "Max", "Skew", "Kurtosis", "Floor%", "Ceiling%")

# Display table
kable(lgen, caption = "Table 7. Descriptive statistics for GIAF items") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 7. Descriptive statistics for GIAF items
Item Mean SD Min Max Skew Kurtosis Floor% Ceiling%
REL1 1 6.164 0.976 1 7 -1.784 5.301 0.668 43.239
REL2 2 6.137 0.965 1 7 -1.669 4.767 0.501 41.068
REL3 3 5.943 0.966 1 7 -1.131 1.973 0.167 29.883
REL4 4 5.958 0.983 1 7 -1.340 3.094 0.334 30.885
ACC 5 6.215 0.876 1 7 -1.592 4.606 0.334 42.571
APP 6 6.132 0.893 1 7 -1.457 4.088 0.334 37.563
PRAC1 7 5.731 1.121 1 7 -1.392 2.623 0.668 22.871
PRAC2 8 5.818 0.955 1 7 -1.461 3.865 0.334 20.200
EFF 9 5.384 1.148 1 7 -0.322 -0.585 0.167 17.529
VAL 10 5.716 1.001 1 7 -0.717 0.419 0.167 21.369
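The floor/ceiling computation above reduces to the share of responses at the observed minimum or maximum. A toy illustration:

```r
x <- c(1, 7, 7, 4, 5)  # toy item responses on a 1-7 scale

# Percentage of responses at the observed minimum (floor) and maximum (ceiling)
floor_pct   <- mean(x == min(x, na.rm = TRUE), na.rm = TRUE) * 100
ceiling_pct <- mean(x == max(x, na.rm = TRUE), na.rm = TRUE) * 100

c(floor_pct, ceiling_pct)  # 20 40: one of five responses at the floor, two at the ceiling
```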

Assessment of multivariate normality assumptions based on Mardia’s tests for skewness and kurtosis, applied to the set of items included in the GIAF instrument:

# Run Mardia's tests
skew_result <- mardiaSkew(giaf_df)
kurt_result <- mardiaKurtosis(giaf_df)

# Round and extract values
mardia_df <- data.frame(
  Test = c("Mardia Skewness", "Mardia Kurtosis"),
  Statistic = round(c(skew_result[2], kurt_result[2]), 3),
  p_value = round(c(skew_result[4], kurt_result[3]), 3)
)
row.names(mardia_df) <- 1:2
# Show table
kable(mardia_df, caption = "Table 8. Mardia's Tests for Multivariate Normality in the total sample")
Table 8. Mardia’s Tests for Multivariate Normality in the total sample
Test Statistic p_value
Mardia Skewness 3230.358 0
Mardia Kurtosis 84.626 0

2.2. DRA Sample

2.2.1. Items in the DRA Sample

Univariate descriptive analysis of the items included in the GIAF instrument in the DRA sample:

# Descriptive statistics
descrgen <- describe(DRA_df)

# Build dataframe with selected stats
lgen <- list(c(1:10), descrgen$mean, descrgen$sd, descrgen$min,
             descrgen$max, descrgen$skew, descrgen$kurtosis)
lgen <- as.data.frame(lgen)

floor_pct  <- sapply(DRA_df, function(x) mean(x == min(x, na.rm = TRUE), na.rm = TRUE) * 100)
ceiling_pct <- sapply(DRA_df, function(x) mean(x == max(x, na.rm = TRUE), na.rm = TRUE) * 100)

lgen <- cbind(lgen,floor_pct)
lgen <- cbind(lgen,ceiling_pct)

# Round and rename columns
lgen <- lgen %>% 
  mutate_if(is.numeric, round, digits = 3)

colnames(lgen) <- c("Item", "Mean", "SD", "Min", "Max", "Skew", "Kurtosis", "Floor%", "Ceiling%")

# Display table
kable(lgen, caption = "Table 9. Descriptive statistics for GIAF items in DRA") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 9. Descriptive statistics for GIAF items in DRA
Item Mean SD Min Max Skew Kurtosis Floor% Ceiling%
REL1 1 6.295 1.017 1 7 -2.221 6.820 0.830 53.527
REL2 2 6.307 0.973 1 7 -2.127 6.288 0.415 52.697
REL3 3 5.851 1.081 1 7 -1.337 2.296 0.415 27.386
REL4 4 5.917 0.980 1 7 -1.344 3.223 0.415 27.801
ACC 5 6.286 0.835 3 7 -1.044 0.646 0.415 48.963
APP 6 6.166 0.956 1 7 -1.645 4.329 0.415 42.324
PRAC1 7 5.473 1.317 1 7 -1.109 1.039 1.245 20.332
PRAC2 8 5.705 1.080 1 7 -1.354 2.462 0.415 19.502
EFF 9 5.071 1.214 2 7 0.283 -1.217 0.415 16.598
VAL 10 5.556 1.125 3 7 -0.419 -0.936 2.490 21.162

Assessment of multivariate normality assumptions based on Mardia’s tests for skewness and kurtosis, applied to the set of items included in the GIAF instrument:

# Run Mardia's tests
skew_result <- mardiaSkew(DRA_df)
kurt_result <- mardiaKurtosis(DRA_df)

# Round and extract values
mardia_df <- data.frame(
  Test = c("Mardia Skewness", "Mardia Kurtosis"),
  Statistic = round(c(skew_result[2], kurt_result[2]), 3),
  p_value = round(c(skew_result[4], kurt_result[3]), 3)
)
row.names(mardia_df) <- 1:2
# Show table
kable(mardia_df, caption = "Table 10. Mardia's Tests for Multivariate Normality in the DRA sample")
Table 10. Mardia’s Tests for Multivariate Normality in the DRA sample
Test Statistic p_value
Mardia Skewness 2307.930 0
Mardia Kurtosis 53.644 0

2.3. Diskussionsforum Sample

2.3.1. Items in the Diskussionsforum Sample

Univariate descriptive analysis of the items included in the GIAF instrument in the Diskussionsforum sample:

# Descriptive statistics
descrgen <- describe(Diskussionsforum_df)

# Build dataframe with selected stats
lgen <- list(c(1:10), descrgen$mean, descrgen$sd, descrgen$min,
             descrgen$max, descrgen$skew, descrgen$kurtosis)
lgen <- as.data.frame(lgen)

floor_pct  <- sapply(Diskussionsforum_df, function(x) mean(x == min(x, na.rm = TRUE), na.rm = TRUE) * 100)
ceiling_pct <- sapply(Diskussionsforum_df, function(x) mean(x == max(x, na.rm = TRUE), na.rm = TRUE) * 100)

lgen <- cbind(lgen,floor_pct)
lgen <- cbind(lgen,ceiling_pct)

# Round and rename columns
lgen <- lgen %>% 
  mutate_if(is.numeric, round, digits = 3)

colnames(lgen) <- c("Item", "Mean", "SD", "Min", "Max", "Skew", "Kurtosis", "Floor%", "Ceiling%")

# Display table
kable(lgen, caption = "Table 11. Descriptive statistics for GIAF items in Diskussionsforum") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 11. Descriptive statistics for GIAF items in Diskussionsforum
Item Mean SD Min Max Skew Kurtosis Floor% Ceiling%
REL1 1 6.067 0.868 3 7 -0.891 0.993 1.581 34.387
REL2 2 5.960 0.921 2 7 -0.773 0.806 0.395 31.621
REL3 3 6.111 0.852 3 7 -0.748 0.304 0.791 37.549
REL4 4 5.917 1.006 2 7 -1.067 1.527 0.791 31.225
ACC 5 6.119 0.922 1 7 -1.595 4.347 0.395 36.759
APP 6 6.051 0.817 3 7 -0.746 0.681 0.791 30.830
PRAC1 7 5.822 0.949 2 7 -1.139 2.224 0.791 22.134
PRAC2 8 5.870 0.861 2 7 -1.124 3.000 0.791 20.949
EFF 9 5.727 0.972 2 7 -0.830 0.977 0.791 20.158
VAL 10 5.893 0.831 2 7 -0.626 1.026 0.395 23.320

Assessment of multivariate normality assumptions based on Mardia’s tests for skewness and kurtosis, applied to the set of items included in the GIAF instrument:

# Run Mardia's tests
skew_result <- mardiaSkew(Diskussionsforum_df)
kurt_result <- mardiaKurtosis(Diskussionsforum_df)

# Round and extract values
mardia_df <- data.frame(
  Test = c("Mardia Skewness", "Mardia Kurtosis"),
  Statistic = round(c(skew_result[2], kurt_result[2]), 3),
  p_value = round(c(skew_result[4], kurt_result[3]), 3)
)
row.names(mardia_df) <- 1:2
# Show table
kable(mardia_df, caption = "Table 12. Mardia's Tests for Multivariate Normality in the Diskussionsforum sample")
Table 12. Mardia’s Tests for Multivariate Normality in the Diskussionsforum sample
Test Statistic p_value
Mardia Skewness 1099.828 0
Mardia Kurtosis 31.304 0

2.4. CMHA Sample

2.4.1. Items in the CMHA Sample

Univariate descriptive analysis of the items included in the GIAF instrument in the CMHA sample:

# Descriptive statistics
descrgen <- describe(CMHA_df)

# Build dataframe with selected stats
lgen <- list(c(1:10), descrgen$mean, descrgen$sd, descrgen$min,
             descrgen$max, descrgen$skew, descrgen$kurtosis)
lgen <- as.data.frame(lgen)

floor_pct  <- sapply(CMHA_df, function(x) mean(x == min(x, na.rm = TRUE), na.rm = TRUE) * 100)
ceiling_pct <- sapply(CMHA_df, function(x) mean(x == max(x, na.rm = TRUE), na.rm = TRUE) * 100)

lgen <- cbind(lgen,floor_pct)
lgen <- cbind(lgen,ceiling_pct)

# Round and rename columns
lgen <- lgen %>% 
  mutate_if(is.numeric, round, digits = 3)

colnames(lgen) <- c("Item", "Mean", "SD", "Min", "Max", "Skew", "Kurtosis", "Floor%", "Ceiling%")

# Display table
kable(lgen, caption = "Table 13. Descriptive statistics for GIAF items in CMHA") %>%
  kable_styling(full_width = FALSE, position = "center")
Table 13. Descriptive statistics for GIAF items in CMHA
Item Mean SD Min Max Skew Kurtosis Floor% Ceiling%
REL1 1 6.095 1.097 1 7 -2.180 7.080 1.905 40.952
REL2 2 6.171 0.985 1 7 -2.854 12.239 1.905 37.143
REL3 3 5.752 0.886 3 7 -0.738 0.597 1.905 17.143
REL4 4 6.152 0.918 1 7 -2.150 8.662 0.952 37.143
ACC 5 6.286 0.840 1 7 -2.687 13.442 0.952 41.905
APP 6 6.248 0.907 1 7 -2.338 9.516 0.952 42.857
PRAC1 7 6.105 0.854 1 7 -2.217 10.550 0.952 30.476
PRAC2 8 5.952 0.836 1 7 -2.158 10.284 0.952 20.000
EFF 9 5.276 1.164 1 7 -0.543 0.437 0.952 13.333
VAL 10 5.657 1.017 1 7 -1.131 2.681 0.952 17.143

Assessment of multivariate normality assumptions based on Mardia’s tests for skewness and kurtosis, applied to the set of items included in the GIAF instrument:

# Run Mardia's tests
skew_result <- mardiaSkew(CMHA_df)
kurt_result <- mardiaKurtosis(CMHA_df)

# Round and extract values
mardia_df <- data.frame(
  Test = c("Mardia Skewness", "Mardia Kurtosis"),
  Statistic = round(c(skew_result[2], kurt_result[2]), 3),
  p_value = round(c(skew_result[4], kurt_result[3]), 3)
)
row.names(mardia_df) <- 1:2
# Show table
kable(mardia_df, caption = "Table 14. Mardia's Tests for Multivariate Normality in the CMHA sample")
Table 14. Mardia’s Tests for Multivariate Normality in the CMHA sample
Test Statistic p_value
Mardia Skewness 921.914 0
Mardia Kurtosis 21.199 0

3. EXPLORATORY FACTOR ANALYSIS (EFA)

3.1. Total Sample

Selects the ten GIAF items from giaf_df, computes a polychoric correlation matrix (appropriate for Likert/ordinal items), and visualizes the correlations with corrplot (numbers shown in the cells). Use this to spot very low or very high inter-item associations and potential redundancy.

datamatrix <- polychoric(giaf_df)
corrplot(datamatrix$rho, method="number")

Assesses whether the data are suitable for EFA using the polychoric matrix:

  • KMO / MSA: overall and item-level sampling adequacy (rules of thumb: ≥ .60 acceptable; ≥ .80 meritorious).
  • Bartlett’s test: a significant p-value supports factorability (correlation matrix differs from identity).
  • Determinant & eigenvalues: very small determinant (e.g., < 1e-5) or non-positive eigenvalues can indicate multicollinearity or a non–positive definite matrix.

The table summarizes values and interpretation guidelines.

# --- Sampling adequacy and factorability ---
kmo <- psych::KMO(datamatrix$rho)                       # Global KMO and item-level MSA
bart <- psych::cortest.bartlett(datamatrix$rho, 
                                 n = nrow(giaf_df))        # Bartlett's test of sphericity
det_r <- det(datamatrix$rho)                            # Determinant of the correlation matrix
eig  <- eigen(datamatrix$rho, only.values = TRUE)$values # Eigenvalues

# Summary table with interpretation guidelines
tbl_adeq <- tibble::tibble(
  Metric = c("KMO (global)", "KMO (minimum MSA)", "Bartlett χ²", "Bartlett df", "Bartlett p-value",
             "Determinant", "Minimum eigenvalue"),
  Value  = c(kmo$MSA, min(kmo$MSAi, na.rm = TRUE), unname(bart$chisq),
             bart$df, bart$p.value, det_r, min(eig)),
  `Guideline (interpretive)` = c(
    "≥ .60 acceptable; ≥ .80 meritorious",
    "≥ .50 acceptable (review items < .50)",
    "",
    "",
    "p < .05 supports factorability",
    "< 1e-5 suggests multicollinearity",
    "> 0 (positive-definite matrix)"
  )
) |>
  mutate(Value = round(Value, 4))

knitr::kable(tbl_adeq, caption = "#. Sampling adequacy and factorability tests") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
  1. Sampling adequacy and factorability tests
Metric Value Guideline (interpretive)
KMO (global) 0.9033 ≥ .60 acceptable; ≥ .80 meritorious
KMO (minimum MSA) 0.8414 ≥ .50 acceptable (review items < .50)
Bartlett χ² 4318.5476
Bartlett df 45.0000
Bartlett p-value 0.0000 p < .05 supports factorability
Determinant 0.0007 < 1e-5 suggests multicollinearity
Minimum eigenvalue 0.1278 > 0 (positive-definite matrix)
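The determinant and eigenvalue checks are two views of the same quantity — the determinant equals the product of the eigenvalues, so a single near-zero eigenvalue drives the determinant toward zero. A toy correlation matrix (illustrative values) makes this concrete:

```r
# Toy 3x3 correlation matrix with one strongly collinear pair
R <- matrix(c(1.00, 0.95, 0.30,
              0.95, 1.00, 0.25,
              0.30, 0.25, 1.00), nrow = 3, byrow = TRUE)

ev <- eigen(R, only.values = TRUE)$values
all.equal(det(R), prod(ev))  # TRUE: det is the product of the eigenvalues
```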

Estimates the number of factors to retain using n_factors on the polychoric matrix (FA / minres, no rotation). The plot shows parallel analysis and related criteria (e.g., optimal coordinates, acceleration factor) against the empirical eigenvalues—use convergent evidence from these rules to pick the factor count.

# Number of factors to retain
results_nfactor <- n_factors(giaf_df, type = "FA", rotation = "none",
                             algorithm = "minres", package = "all",
                             cor = datamatrix$rho, safe = TRUE, n_max = NULL)
results_nfactor

Tabulates the n_factors results so you can read off each decision rule (e.g., PA, OC, AF) and the suggested number of factors per criterion.

data.frame("n_Factors" = results_nfactor$n_Factors,
           "Method"    = results_nfactor$Method)
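n_factors aggregates many decision rules; the usual summary is the modal suggestion across methods. A toy tally (made-up method votes, not our results):

```r
# Hypothetical per-method suggestions of the number of factors (e.g., PA, OC, AF, ...)
suggested <- c(1, 1, 1, 2, 1, 2, 1)

votes <- table(suggested)
as.integer(names(votes)[which.max(votes)])  # 1: modal number of factors
```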

Fits the EFA model with 1 factor using weighted least squares (fm = "wls") on polychoric correlations and no rotation (since one factor). WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for the factor solution.

# Exploratory factor analysis: one-factor solution
WTM_factor <- fa(giaf_df, nfactors = 1, fm = "wls", rotate = "none", cor = "poly")
WTM_factor$Vaccounted
##                     WLS1
## SS loadings    5.6249510
## Proportion Var 0.5624951

Builds a clean loadings table for the first factor and adds:

  • h2 (communality): variance in each item explained by the factor.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 in a clear one-factor solution).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract the loadings matrix
L <- as.data.frame(unclass(WTM_factor$loadings))

# Take the first column of loadings (e.g., "WLS1")
fact1 <- L[, 1, drop = FALSE]
colnames(fact1) <- colnames(L)[1]

# Communalities (h2), uniquenesses (u2), and complexity (com)
h2 <- if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(L^2)  # fallback
}

u2 <- if (!is.null(WTM_factor$uniquenesses)) WTM_factor$uniquenesses else 1 - h2
com <- if (!is.null(WTM_factor$complexity))  WTM_factor$complexity  else rep(NA_real_, nrow(L))

# Assemble and round the final table
tab <- cbind(fact1, h2 = h2, u2 = u2, com = com)
tab <- round(tab, 3)

# Display
knitr::kable(tab, caption = "Loadings (Factor 1), h2, u2, and complexity (com)", align = "r") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Loadings (Factor 1), h2, u2, and complexity (com)
WLS1 h2 u2 com
REL1 0.800 0.640 0.360 1
REL2 0.788 0.621 0.379 1
REL3 0.711 0.506 0.494 1
REL4 0.783 0.613 0.387 1
ACC 0.843 0.710 0.290 1
APP 0.850 0.723 0.277 1
PRAC1 0.656 0.431 0.569 1
PRAC2 0.715 0.511 0.489 1
EFF 0.552 0.305 0.695 1
VAL 0.752 0.565 0.435 1
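In a one-factor solution the reported quantities are linked: h2 is the squared loading, and SS loadings is the sum of the h2 column. Verifying with the tabled values (rounded to three decimals, so the sum differs trivially from the reported 5.6249510):

```r
# Loadings as reported (rounded) in the table above
loadings <- c(REL1 = 0.800, REL2 = 0.788, REL3 = 0.711, REL4 = 0.783,
              ACC = 0.843, APP = 0.850, PRAC1 = 0.656, PRAC2 = 0.715,
              EFF = 0.552, VAL = 0.752)

h2 <- loadings^2        # communality = squared loading in a one-factor model
round(h2[["REL1"]], 3)  # 0.64, matching the h2 column
sum(h2)                 # ~5.624, matching SS loadings (5.6249510) up to rounding
```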

Draws the factor diagram for the fitted EFA, showing the single latent factor, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

Fits the EFA model with 2 factors using weighted least squares (fm = "wls") on polychoric correlations with oblimin rotation. WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for each factor.

# Exploratory factor analysis: two-factor solution
WTM_factor <- fa(giaf_df, nfactors = 2, fm = "wls", rotate = "oblimin", cor = "poly")
## Loading required namespace: GPArotation
WTM_factor$Vaccounted
##                            WLS1      WLS2
## SS loadings           4.1395336 2.2871648
## Proportion Var        0.4139534 0.2287165
## Cumulative Var        0.4139534 0.6426698
## Proportion Explained  0.6441151 0.3558849
## Cumulative Proportion 0.6441151 1.0000000
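Because oblimin is an oblique rotation, the two factors are allowed to correlate; psych stores the inter-factor correlation matrix in the fitted object's Phi slot. A quick check (assuming WTM_factor still holds this two-factor fit):

```r
# Inter-factor correlation matrix of the oblique solution
round(WTM_factor$Phi, 3)
```

A high correlation between WLS1 and WLS2 (e.g., > .70) would suggest the two factors are not clearly distinct and a one-factor model may suffice.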

Builds a clean loadings table for the two-factor solution and adds:

  • h2 (communality): variance in each item explained by the factors.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 when an item loads cleanly on a single factor).

Values are rounded to three decimals and displayed with item names as row labels.
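For reference, the complexity index reported as com is Hofmann's row complexity, which can be computed directly from an item's loadings. A minimal base-R sketch with illustrative (made-up) loadings, not study data:

```r
# Hofmann's complexity for one item: (sum(lambda^2))^2 / sum(lambda^4)
item_complexity <- function(lambda) (sum(lambda^2))^2 / sum(lambda^4)

item_complexity(c(0.80, 0.05))  # close to 1: item loads on one factor
item_complexity(c(0.50, 0.50))  # exactly 2: item splits evenly across two factors
```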

# Extract full loading matrix as a data.frame
L <- as.data.frame(unclass(WTM_factor$loadings))

# If loadings do not have clear column names, assign "F1", "F2", ...
if (is.null(colnames(L)) || any(colnames(L) == ""))
  colnames(L) <- paste0("F", seq_len(ncol(L)))

# --- Communalities (h2), uniquenesses (u2), and complexity (com) ---
# (robust to different object structures across package versions)
h2 <- if (!is.null(WTM_factor$h2)) {
  WTM_factor$h2
} else if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(as.matrix(L)^2, na.rm = TRUE)
}

u2 <- if (!is.null(WTM_factor$u2)) {
  WTM_factor$u2
} else if (!is.null(WTM_factor$uniquenesses)) {
  WTM_factor$uniquenesses
} else {
  1 - h2
}

com <- if (!is.null(WTM_factor$complexity)) WTM_factor$complexity else rep(NA_real_, nrow(L))

# Align by item name, just in case
aligner <- rownames(L)
h2 <- h2[aligner]
u2 <- u2[aligner]
if (!is.null(names(com))) com <- com[aligner]

# Assign each item to the factor with the largest absolute loading
which_fac <- apply(abs(as.matrix(L)), 1, function(x) colnames(L)[which.max(x)])
max_abs   <- apply(abs(as.matrix(L)), 1, max)

# Final table
tab <- cbind(L, h2 = h2, u2 = u2, com = com,
             Assigned = which_fac, MaxAbs = max_abs)

# Order table by assigned factor and loading magnitude
tab <- tab[order(tab[,"Assigned"], -tab[,"MaxAbs"]), , drop = FALSE]

# Rounding
tab[, colnames(L)] <- round(tab[, colnames(L), drop = FALSE], 3)
tab[, c("h2","u2","com","MaxAbs")] <- round(tab[, c("h2","u2","com","MaxAbs")], 3)

# Display
knitr::kable(
  tab,
  caption = "Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity",
  align = "r"
) |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity
WLS1 WLS2 h2 u2 com Assigned MaxAbs
REL1 0.929 -0.102 0.756 0.244 1.024 WLS1 0.929
REL2 0.908 -0.094 0.728 0.272 1.021 WLS1 0.908
APP 0.785 0.115 0.741 0.259 1.043 WLS1 0.785
ACC 0.742 0.155 0.716 0.284 1.087 WLS1 0.742
REL4 0.647 0.197 0.614 0.386 1.184 WLS1 0.647
REL3 0.460 0.331 0.509 0.491 1.818 WLS1 0.460
EFF -0.110 0.840 0.604 0.396 1.034 WLS2 0.840
VAL 0.155 0.772 0.766 0.234 1.080 WLS2 0.772
PRAC2 0.358 0.459 0.541 0.459 1.887 WLS2 0.459
PRAC1 0.321 0.425 0.452 0.548 1.860 WLS2 0.425

Draws the factor diagram for the fitted EFA, showing the two latent factors, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

3.2. DRA Sample

Selects the ten GIAF items from DRA_df, computes a polychoric correlation matrix (appropriate for Likert/ordinal items), and visualizes the correlations with corrplot (numbers shown in the cells). Use this to spot very low or very high inter-item associations and potential redundancy.

datamatrix <- polychoric(DRA_df)
corrplot(datamatrix$rho, method="number")

Assesses whether the data are suitable for EFA using the polychoric matrix:

  • KMO / MSA: overall and item-level sampling adequacy (rules of thumb: ≥ .60 acceptable; ≥ .80 meritorious).
  • Bartlett’s test: a significant p-value supports factorability (correlation matrix differs from identity).
  • Determinant & eigenvalues: very small determinant (e.g., < 1e-5) or non-positive eigenvalues can indicate multicollinearity or a non–positive definite matrix.

The table summarizes values and interpretation guidelines.

# --- Sampling adequacy and factorability ---
kmo <- psych::KMO(datamatrix$rho)                       # Global KMO and item-level MSA
bart <- psych::cortest.bartlett(datamatrix$rho, 
                                 n = nrow(DRA_df))        # Bartlett's test of sphericity
det_r <- det(datamatrix$rho)                            # Determinant of the correlation matrix
eig  <- eigen(datamatrix$rho, only.values = TRUE)$values # Eigenvalues

# Summary table with interpretation guidelines
tbl_adeq <- tibble::tibble(
  Metric = c("KMO (global)", "KMO (minimum MSA)", "Bartlett χ²", "Bartlett df", "Bartlett p-value",
             "Determinant", "Minimum eigenvalue"),
  Value  = c(kmo$MSA, min(kmo$MSAi, na.rm = TRUE), unname(bart$chisq),
             bart$df, bart$p.value, det_r, min(eig)),
  `Guideline (interpretive)` = c(
    "≥ .60 acceptable; ≥ .80 meritorious",
    "≥ .50 acceptable (review items < .50)",
    "",
    "",
    "p < .05 supports factorability",
    "< 1e-5 suggests multicollinearity",
    "> 0 (positive-definite matrix)"
  )
) |>
  mutate(Value = round(Value, 4))

knitr::kable(tbl_adeq, caption = "Sampling adequacy and factorability tests") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Sampling adequacy and factorability tests
Metric Value Guideline (interpretive)
KMO (global) 0.8805 ≥ .60 acceptable; ≥ .80 meritorious
KMO (minimum MSA) 0.7980 ≥ .50 acceptable (review items < .50)
Bartlett χ² 1853.8011
Bartlett df 45.0000
Bartlett p-value 0.0000 p < .05 supports factorability
Determinant 0.0004 < 1e-5 suggests multicollinearity
Minimum eigenvalue 0.1047 > 0 (positive-definite matrix)
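As a reminder of why both diagnostics are reported: the determinant of a correlation matrix equals the product of its eigenvalues, so a near-zero determinant and a near-zero minimum eigenvalue flag the same redundancy. A minimal base-R check on a toy 2 × 2 correlation matrix (illustrative values, not study data):

```r
R <- matrix(c(1.0, 0.9,
              0.9, 1.0), nrow = 2)        # two highly correlated items
ev <- eigen(R, only.values = TRUE)$values # 1.9 and 0.1

det(R)    # 0.19
prod(ev)  # 0.19 -- same quantity
```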

Estimates the number of factors to retain using n_factors on the polychoric matrix (FA / minres, no rotation). The plot shows parallel analysis and related criteria (e.g., optimal coordinates, acceleration factor) against the empirical eigenvalues—use convergent evidence from these rules to pick the factor count.

#n factors
results_nfactor <- n_factors(DRA_df, type = "FA", rotation = "none",
  algorithm = "minres", package = "all",
  cor = datamatrix$rho, safe = TRUE, n_max = NULL)
results_nfactor

Summarizes the n_factors output in a compact table, pairing each decision rule (e.g., PA, OC, AF) with the number of factors it suggests.

data.frame(n_Factors = results_nfactor$n_Factors,
           Method    = results_nfactor$Method)

Fits the EFA model with 1 factor using weighted least squares (fm = "wls") on polychoric correlations and no rotation (since one factor). WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for the factor solution.

# Exploratory Factor Analysis: DRA sample, one-factor solution
WTM_factor <- fa(DRA_df, nfactors = 1, fm = "wls", rotate = "none", cor = "poly")
WTM_factor$Vaccounted
##                     WLS1
## SS loadings    5.7036818
## Proportion Var 0.5703682
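As a sanity check, the SS loadings value is simply the column sum of squared loadings for the single factor. A minimal sketch, assuming WTM_factor still holds this one-factor fit:

```r
# SS loadings for factor 1 = sum of squared loadings in that column
sum(WTM_factor$loadings[, 1]^2)  # matches the "SS loadings" row of Vaccounted
```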

Builds a clean loadings table for the first factor and adds:

  • h2 (communality): variance in each item explained by the factor.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 in a clear one-factor solution).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract the loading matrix
L <- as.data.frame(unclass(WTM_factor$loadings))

# Take the first column of loadings (e.g., "WLS1")
fact1 <- L[, 1, drop = FALSE]
colnames(fact1) <- colnames(L)[1]

# Communalities (h2), uniquenesses (u2), and complexity (com)
h2 <- if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(L^2)  # fallback
}

u2 <- if (!is.null(WTM_factor$uniquenesses)) WTM_factor$uniquenesses else 1 - h2
com <- if (!is.null(WTM_factor$complexity))  WTM_factor$complexity  else rep(NA_real_, nrow(L))

# Assemble and round the final table
tab <- cbind(fact1, h2 = h2, u2 = u2, com = com)
tab <- round(tab, 3)

# Display
knitr::kable(tab, caption = "Loadings (Factor 1), h2, u2, and complexity (com)", align = "r") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Loadings (Factor 1), h2, u2, and complexity (com)
WLS1 h2 u2 com
REL1 0.819 0.671 0.329 1
REL2 0.877 0.769 0.231 1
REL3 0.760 0.577 0.423 1
REL4 0.795 0.633 0.367 1
ACC 0.871 0.759 0.241 1
APP 0.834 0.696 0.304 1
PRAC1 0.612 0.375 0.625 1
PRAC2 0.667 0.444 0.556 1
EFF 0.484 0.234 0.766 1
VAL 0.738 0.545 0.455 1

Draws the factor diagram for the fitted EFA, showing the single latent factor, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

Fits the EFA model with 2 factors using weighted least squares (fm = "wls") on polychoric correlations and promax (oblique) rotation. WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for each factor.

# Exploratory Factor Analysis: DRA sample, two-factor solution
WTM_factor <- fa(DRA_df, nfactors = 2, fm = "wls", rotate = "promax", cor = "poly")
WTM_factor$Vaccounted
##                            WLS1      WLS2
## SS loadings           4.8700646 1.6450061
## Proportion Var        0.4870065 0.1645006
## Cumulative Var        0.4870065 0.6515071
## Proportion Explained  0.7475076 0.2524924
## Cumulative Proportion 0.7475076 1.0000000

Builds a clean loadings table for the two-factor solution and adds:

  • h2 (communality): variance in each item explained by the factors.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 when an item loads cleanly on a single factor).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract full loading matrix as a data.frame
L <- as.data.frame(unclass(WTM_factor$loadings))

# If loadings do not have clear column names, assign "F1", "F2", ...
if (is.null(colnames(L)) || any(colnames(L) == ""))
  colnames(L) <- paste0("F", seq_len(ncol(L)))

# --- Communalities (h2), uniquenesses (u2), and complexity (com) ---
# (robust to different object structures across package versions)
h2 <- if (!is.null(WTM_factor$h2)) {
  WTM_factor$h2
} else if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(as.matrix(L)^2, na.rm = TRUE)
}

u2 <- if (!is.null(WTM_factor$u2)) {
  WTM_factor$u2
} else if (!is.null(WTM_factor$uniquenesses)) {
  WTM_factor$uniquenesses
} else {
  1 - h2
}

com <- if (!is.null(WTM_factor$complexity)) WTM_factor$complexity else rep(NA_real_, nrow(L))

# Align by item name, just in case
aligner <- rownames(L)
h2 <- h2[aligner]
u2 <- u2[aligner]
if (!is.null(names(com))) com <- com[aligner]

# Assign each item to the factor with the largest absolute loading
which_fac <- apply(abs(as.matrix(L)), 1, function(x) colnames(L)[which.max(x)])
max_abs   <- apply(abs(as.matrix(L)), 1, max)

# Final table
tab <- cbind(L, h2 = h2, u2 = u2, com = com,
             Assigned = which_fac, MaxAbs = max_abs)

# Order table by assigned factor and loading magnitude
tab <- tab[order(tab[,"Assigned"], -tab[,"MaxAbs"]), , drop = FALSE]

# Rounding
tab[, colnames(L)] <- round(tab[, colnames(L), drop = FALSE], 3)
tab[, c("h2","u2","com","MaxAbs")] <- round(tab[, c("h2","u2","com","MaxAbs")], 3)

# Display
knitr::kable(
  tab,
  caption = "Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity",
  align = "r"
) |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity
WLS1 WLS2 h2 u2 com Assigned MaxAbs
REL1 0.982 -0.182 0.757 0.243 1.069 WLS1 0.982
APP 0.953 -0.131 0.757 0.243 1.038 WLS1 0.953
REL2 0.948 -0.072 0.813 0.187 1.012 WLS1 0.948
ACC 0.779 0.125 0.754 0.246 1.052 WLS1 0.779
REL4 0.751 0.066 0.635 0.365 1.015 WLS1 0.751
REL3 0.655 0.137 0.568 0.432 1.088 WLS1 0.655
PRAC1 0.446 0.216 0.375 0.625 1.446 WLS1 0.446
PRAC2 0.437 0.294 0.450 0.550 1.754 WLS1 0.437
EFF -0.209 0.929 0.645 0.355 1.101 WLS2 0.929
VAL 0.197 0.727 0.760 0.240 1.147 WLS2 0.727

Draws the factor diagram for the fitted EFA, showing the two latent factors, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

3.3. Diskussionsforum Sample

Selects the ten GIAF items from Diskussionsforum_df, computes a polychoric correlation matrix (appropriate for Likert/ordinal items), and visualizes the correlations with corrplot (numbers shown in the cells). Use this to spot very low or very high inter-item associations and potential redundancy.

datamatrix <- polychoric(Diskussionsforum_df)
corrplot(datamatrix$rho, method="number")

Assesses whether the data are suitable for EFA using the polychoric matrix:

  • KMO / MSA: overall and item-level sampling adequacy (rules of thumb: ≥ .60 acceptable; ≥ .80 meritorious).
  • Bartlett’s test: a significant p-value supports factorability (correlation matrix differs from identity).
  • Determinant & eigenvalues: very small determinant (e.g., < 1e-5) or non-positive eigenvalues can indicate multicollinearity or a non–positive definite matrix.

The table summarizes values and interpretation guidelines.

# --- Sampling adequacy and factorability ---
kmo <- psych::KMO(datamatrix$rho)                       # Global KMO and item-level MSA
bart <- psych::cortest.bartlett(datamatrix$rho, 
                                 n = nrow(Diskussionsforum_df))        # Bartlett's test of sphericity
det_r <- det(datamatrix$rho)                            # Determinant of the correlation matrix
eig  <- eigen(datamatrix$rho, only.values = TRUE)$values # Eigenvalues

# Summary table with interpretation guidelines
tbl_adeq <- tibble::tibble(
  Metric = c("KMO (global)", "KMO (minimum MSA)", "Bartlett χ²", "Bartlett df", "Bartlett p-value",
             "Determinant", "Minimum eigenvalue"),
  Value  = c(kmo$MSA, min(kmo$MSAi, na.rm = TRUE), unname(bart$chisq),
             bart$df, bart$p.value, det_r, min(eig)),
  `Guideline (interpretive)` = c(
    "≥ .60 acceptable; ≥ .80 meritorious",
    "≥ .50 acceptable (review items < .50)",
    "",
    "",
    "p < .05 supports factorability",
    "< 1e-5 suggests multicollinearity",
    "> 0 (positive-definite matrix)"
  )
) |>
  mutate(Value = round(Value, 4))

knitr::kable(tbl_adeq, caption = "Sampling adequacy and factorability tests") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Sampling adequacy and factorability tests
Metric Value Guideline (interpretive)
KMO (global) 0.9011 ≥ .60 acceptable; ≥ .80 meritorious
KMO (minimum MSA) 0.8485 ≥ .50 acceptable (review items < .50)
Bartlett χ² 1793.0057
Bartlett df 45.0000
Bartlett p-value 0.0000 p < .05 supports factorability
Determinant 0.0007 < 1e-5 suggests multicollinearity
Minimum eigenvalue 0.1374 > 0 (positive-definite matrix)

Estimates the number of factors to retain using n_factors on the polychoric matrix (FA / minres, no rotation). The plot shows parallel analysis and related criteria (e.g., optimal coordinates, acceleration factor) against the empirical eigenvalues—use convergent evidence from these rules to pick the factor count.

#n factors
results_nfactor <- n_factors(Diskussionsforum_df, type = "FA", rotation = "none",
  algorithm = "minres", package = "all",
  cor = datamatrix$rho, safe = TRUE, n_max = NULL)
## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully

## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully

## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully
results_nfactor

Summarizes the n_factors output in a compact table, pairing each decision rule (e.g., PA, OC, AF) with the number of factors it suggests.

data.frame(n_Factors = results_nfactor$n_Factors,
           Method    = results_nfactor$Method)

Fits the EFA model with 1 factor using weighted least squares (fm = "wls") on polychoric correlations and no rotation (since one factor). WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for the factor solution.

# Exploratory Factor Analysis: Diskussionsforum sample, one-factor solution
WTM_factor <- fa(Diskussionsforum_df, nfactors = 1, fm = "wls", rotate = "none", cor = "poly")
WTM_factor$Vaccounted
##                     WLS1
## SS loadings    5.7418485
## Proportion Var 0.5741849

Builds a clean loadings table for the first factor and adds:

  • h2 (communality): variance in each item explained by the factor.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 in a clear one-factor solution).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract the loading matrix
L <- as.data.frame(unclass(WTM_factor$loadings))

# Take the first column of loadings (e.g., "WLS1")
fact1 <- L[, 1, drop = FALSE]
colnames(fact1) <- colnames(L)[1]

# Communalities (h2), uniquenesses (u2), and complexity (com)
h2 <- if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(L^2)  # fallback
}

u2 <- if (!is.null(WTM_factor$uniquenesses)) WTM_factor$uniquenesses else 1 - h2
com <- if (!is.null(WTM_factor$complexity))  WTM_factor$complexity  else rep(NA_real_, nrow(L))

# Assemble and round the final table
tab <- cbind(fact1, h2 = h2, u2 = u2, com = com)
tab <- round(tab, 3)

# Display
knitr::kable(tab, caption = "Loadings (Factor 1), h2, u2, and complexity (com)", align = "r") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Loadings (Factor 1), h2, u2, and complexity (com)
WLS1 h2 u2 com
REL1 0.793 0.629 0.371 1
REL2 0.752 0.566 0.434 1
REL3 0.727 0.529 0.471 1
REL4 0.737 0.544 0.456 1
ACC 0.789 0.623 0.377 1
APP 0.851 0.724 0.276 1
PRAC1 0.651 0.424 0.576 1
PRAC2 0.744 0.554 0.446 1
EFF 0.657 0.431 0.569 1
VAL 0.847 0.717 0.283 1

Draws the factor diagram for the fitted EFA, showing the single latent factor, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

Fits the EFA model with 2 factors using weighted least squares (fm = "wls") on polychoric correlations and promax (oblique) rotation. WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for each factor.

# Exploratory Factor Analysis: Diskussionsforum sample, two-factor solution
WTM_factor <- fa(Diskussionsforum_df, nfactors = 2, fm = "wls", rotate = "promax", cor = "poly")
WTM_factor$Vaccounted
##                            WLS1      WLS2
## SS loadings           4.0834123 2.3470150
## Proportion Var        0.4083412 0.2347015
## Cumulative Var        0.4083412 0.6430427
## Proportion Explained  0.6350141 0.3649859
## Cumulative Proportion 0.6350141 1.0000000

Builds a clean loadings table for the two-factor solution and adds:

  • h2 (communality): variance in each item explained by the factors.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 when an item loads cleanly on a single factor).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract full loading matrix as a data.frame
L <- as.data.frame(unclass(WTM_factor$loadings))

# If loadings do not have clear column names, assign "F1", "F2", ...
if (is.null(colnames(L)) || any(colnames(L) == ""))
  colnames(L) <- paste0("F", seq_len(ncol(L)))

# --- Communalities (h2), uniquenesses (u2), and complexity (com) ---
# (robust to different object structures across package versions)
h2 <- if (!is.null(WTM_factor$h2)) {
  WTM_factor$h2
} else if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(as.matrix(L)^2, na.rm = TRUE)
}

u2 <- if (!is.null(WTM_factor$u2)) {
  WTM_factor$u2
} else if (!is.null(WTM_factor$uniquenesses)) {
  WTM_factor$uniquenesses
} else {
  1 - h2
}

com <- if (!is.null(WTM_factor$complexity)) WTM_factor$complexity else rep(NA_real_, nrow(L))

# Align by item name, just in case
aligner <- rownames(L)
h2 <- h2[aligner]
u2 <- u2[aligner]
if (!is.null(names(com))) com <- com[aligner]

# Assign each item to the factor with the largest absolute loading
which_fac <- apply(abs(as.matrix(L)), 1, function(x) colnames(L)[which.max(x)])
max_abs   <- apply(abs(as.matrix(L)), 1, max)

# Final table
tab <- cbind(L, h2 = h2, u2 = u2, com = com,
             Assigned = which_fac, MaxAbs = max_abs)

# Order table by assigned factor and loading magnitude
tab <- tab[order(tab[,"Assigned"], -tab[,"MaxAbs"]), , drop = FALSE]

# Rounding
tab[, colnames(L)] <- round(tab[, colnames(L), drop = FALSE], 3)
tab[, c("h2","u2","com","MaxAbs")] <- round(tab[, c("h2","u2","com","MaxAbs")], 3)

# Display
knitr::kable(
  tab,
  caption = "Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity",
  align = "r"
) |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity
WLS1 WLS2 h2 u2 com Assigned MaxAbs
VAL 0.865 0.011 0.762 0.238 1.000 WLS1 0.865
PRAC1 0.802 -0.139 0.496 0.504 1.060 WLS1 0.802
PRAC2 0.784 -0.016 0.596 0.404 1.001 WLS1 0.784
APP 0.736 0.149 0.727 0.273 1.081 WLS1 0.736
EFF 0.658 0.021 0.454 0.546 1.002 WLS1 0.658
ACC 0.597 0.230 0.614 0.386 1.291 WLS1 0.597
REL3 0.511 0.256 0.522 0.478 1.474 WLS1 0.511
REL4 0.479 0.302 0.536 0.464 1.687 WLS1 0.479
REL1 -0.128 1.089 0.994 0.006 1.028 WLS2 1.089
REL2 0.043 0.821 0.728 0.272 1.006 WLS2 0.821

Draws the factor diagram for the fitted EFA, showing the two latent factors, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

3.4. CMHA Sample

Selects the ten GIAF items from CMHA_df, computes a polychoric correlation matrix (appropriate for Likert/ordinal items), and visualizes the correlations with corrplot (numbers shown in the cells). Use this to spot very low or very high inter-item associations and potential redundancy.

datamatrix <- polychoric(CMHA_df)
corrplot(datamatrix$rho, method="number")

Assesses whether the data are suitable for EFA using the polychoric matrix:

  • KMO / MSA: overall and item-level sampling adequacy (rules of thumb: ≥ .60 acceptable; ≥ .80 meritorious).
  • Bartlett’s test: a significant p-value supports factorability (correlation matrix differs from identity).
  • Determinant & eigenvalues: very small determinant (e.g., < 1e-5) or non-positive eigenvalues can indicate multicollinearity or a non–positive definite matrix.

The table summarizes values and interpretation guidelines.

# --- Sampling adequacy and factorability ---
kmo <- psych::KMO(datamatrix$rho)                       # Global KMO and item-level MSA
bart <- psych::cortest.bartlett(datamatrix$rho, 
                                 n = nrow(CMHA_df))        # Bartlett's test of sphericity
det_r <- det(datamatrix$rho)                            # Determinant of the correlation matrix
eig  <- eigen(datamatrix$rho, only.values = TRUE)$values # Eigenvalues

# Summary table with interpretation guidelines
tbl_adeq <- tibble::tibble(
  Metric = c("KMO (global)", "KMO (minimum MSA)", "Bartlett χ²", "Bartlett df", "Bartlett p-value",
             "Determinant", "Minimum eigenvalue"),
  Value  = c(kmo$MSA, min(kmo$MSAi, na.rm = TRUE), unname(bart$chisq),
             bart$df, bart$p.value, det_r, min(eig)),
  `Guideline (interpretive)` = c(
    "≥ .60 acceptable; ≥ .80 meritorious",
    "≥ .50 acceptable (review items < .50)",
    "",
    "",
    "p < .05 supports factorability",
    "< 1e-5 suggests multicollinearity",
    "> 0 (positive-definite matrix)"
  )
) |>
  mutate(Value = round(Value, 4))

knitr::kable(tbl_adeq, caption = "Sampling adequacy and factorability tests") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Sampling adequacy and factorability tests
Metric Value Guideline (interpretive)
KMO (global) 0.7737 ≥ .60 acceptable; ≥ .80 meritorious
KMO (minimum MSA) 0.5819 ≥ .50 acceptable (review items < .50)
Bartlett χ² 1259.9596
Bartlett df 45.0000
Bartlett p-value 0.0000 p < .05 supports factorability
Determinant 0.0000 < 1e-5 suggests multicollinearity
Minimum eigenvalue 0.0303 > 0 (positive-definite matrix)
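Note that rounding to four decimals makes the CMHA determinant display as 0.0000, which hides whether it actually falls below the 1e-5 guideline. Printing it in scientific notation removes the ambiguity (a small sketch, assuming datamatrix still holds the CMHA polychoric matrix):

```r
# Scientific notation keeps the 1e-5 threshold visible
formatC(det(datamatrix$rho), format = "e", digits = 2)
```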

Estimates the number of factors to retain using n_factors on the polychoric matrix (FA / minres, no rotation). The plot shows parallel analysis and related criteria (e.g., optimal coordinates, acceleration factor) against the empirical eigenvalues—use convergent evidence from these rules to pick the factor count.

#n factors
results_nfactor <- n_factors(CMHA_df, type = "FA", rotation = "none",
  algorithm = "minres", package = "all",
  cor = datamatrix$rho, safe = TRUE, n_max = NULL)
## Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, :
## The estimated weights for the factor scores are probably incorrect.  Try a
## different factor score estimation method.
## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully
## Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, :
## The estimated weights for the factor scores are probably incorrect.  Try a
## different factor score estimation method.
## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully

## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully

## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully
results_nfactor

Summarizes the n_factors output in a compact table, pairing each decision rule (e.g., PA, OC, AF) with the number of factors it suggests.

data.frame(n_Factors = results_nfactor$n_Factors,
           Method    = results_nfactor$Method)

Fits the EFA model with 1 factor using weighted least squares (fm = "wls") on polychoric correlations and no rotation (since one factor). WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for the factor solution.

# Exploratory Factor Analysis: CMHA sample, one-factor solution
WTM_factor <- fa(CMHA_df, nfactors = 1, fm = "wls", rotate = "none", cor = "poly")
WTM_factor$Vaccounted
##                     WLS1
## SS loadings    6.8275123
## Proportion Var 0.6827512

Builds a clean loadings table for the first factor and adds:

  • h2 (communality): variance in each item explained by the factor.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 in a clear one-factor solution).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract the loading matrix
L <- as.data.frame(unclass(WTM_factor$loadings))

# Take the first column of loadings (e.g., "WLS1")
fact1 <- L[, 1, drop = FALSE]
colnames(fact1) <- colnames(L)[1]

# Communalities (h2), uniquenesses (u2), and complexity (com)
h2 <- if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(L^2)  # fallback
}

u2 <- if (!is.null(WTM_factor$uniquenesses)) WTM_factor$uniquenesses else 1 - h2
com <- if (!is.null(WTM_factor$complexity))  WTM_factor$complexity  else rep(NA_real_, nrow(L))

# Assemble and round the final table
tab <- cbind(fact1, h2 = h2, u2 = u2, com = com)
tab <- round(tab, 3)

# Display
knitr::kable(tab, caption = "Loadings (Factor 1), h2, u2, and complexity (com)", align = "r") |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Loadings (Factor 1), h2, u2, and complexity (com)
WLS1 h2 u2 com
REL1 0.842 0.709 0.291 1
REL2 0.779 0.607 0.393 1
REL3 0.739 0.546 0.454 1
REL4 0.874 0.764 0.236 1
ACC 0.912 0.832 0.168 1
APP 0.909 0.826 0.174 1
PRAC1 0.891 0.795 0.205 1
PRAC2 0.832 0.693 0.307 1
EFF 0.706 0.499 0.501 1
VAL 0.747 0.559 0.441 1

Draws the factor diagram for the fitted EFA, showing the single latent factor, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

Fits the EFA model with 2 factors using weighted least squares (fm = "wls") on polychoric correlations and promax (oblique) rotation. WTM_factor$Vaccounted reports variance explained: SS loadings, proportion, and cumulative proportion for each factor.

# Exploratory Factor Analysis: CMHA sample, two-factor solution
WTM_factor <- fa(CMHA_df, nfactors = 2, fm = "wls", rotate = "promax", cor = "poly")
WTM_factor$Vaccounted
##                            WLS1      WLS2
## SS loadings           4.0461459 3.5815830
## Proportion Var        0.4046146 0.3581583
## Cumulative Var        0.4046146 0.7627729
## Proportion Explained  0.5304522 0.4695478
## Cumulative Proportion 0.5304522 1.0000000

Builds a clean loadings table for the two-factor solution and adds:

  • h2 (communality): variance in each item explained by the factors.
  • u2 (uniqueness): residual item variance (1 − h2).
  • com (complexity): how many factors are needed to account for an item (≈1 when an item loads cleanly on a single factor).

Values are rounded to three decimals and displayed with item names as row labels.

# Extract full loading matrix as a data.frame
L <- as.data.frame(unclass(WTM_factor$loadings))

# If loadings do not have clear column names, assign "F1", "F2", ...
if (is.null(colnames(L)) || any(colnames(L) == ""))
  colnames(L) <- paste0("F", seq_len(ncol(L)))

# --- Communalities (h2), uniquenesses (u2), and complexity (com) ---
# (robust to different object structures across package versions)
h2 <- if (!is.null(WTM_factor$h2)) {
  WTM_factor$h2
} else if (!is.null(WTM_factor$communality)) {
  WTM_factor$communality
} else if (!is.null(WTM_factor$communalities)) {
  WTM_factor$communalities
} else {
  rowSums(as.matrix(L)^2, na.rm = TRUE)
}

u2 <- if (!is.null(WTM_factor$u2)) {
  WTM_factor$u2
} else if (!is.null(WTM_factor$uniquenesses)) {
  WTM_factor$uniquenesses
} else {
  1 - h2
}

com <- if (!is.null(WTM_factor$complexity)) WTM_factor$complexity else rep(NA_real_, nrow(L))

# Align by item name, just in case
aligner <- rownames(L)
h2 <- h2[aligner]
u2 <- u2[aligner]
if (!is.null(names(com))) com <- com[aligner]

# Assign each item to the factor with the largest absolute loading
which_fac <- apply(abs(as.matrix(L)), 1, function(x) colnames(L)[which.max(x)])
max_abs   <- apply(abs(as.matrix(L)), 1, max)

# Final table
tab <- cbind(L, h2 = h2, u2 = u2, com = com,
             Assigned = which_fac, MaxAbs = max_abs)

# Order table by assigned factor and loading magnitude
tab <- tab[order(tab[,"Assigned"], -tab[,"MaxAbs"]), , drop = FALSE]

# Rounding
tab[, colnames(L)] <- round(tab[, colnames(L), drop = FALSE], 3)
tab[, c("h2","u2","com","MaxAbs")] <- round(tab[, c("h2","u2","com","MaxAbs")], 3)

# Display
knitr::kable(
  tab,
  caption = "Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity",
  align = "r"
) |>
  kableExtra::kable_styling(full_width = FALSE, position = "center")
Factor loadings (all factors), communalities (h2), uniquenesses (u2), and complexity

| Item  |   WLS1 |   WLS2 |    h2 |    u2 |   com | Assigned | MaxAbs |
|:------|-------:|-------:|------:|------:|------:|:---------|-------:|
| REL1  |  1.016 | -0.108 | 0.880 | 0.120 | 1.023 | WLS1     |  1.016 |
| REL2  |  0.985 | -0.148 | 0.773 | 0.227 | 1.045 | WLS1     |  0.985 |
| REL4  |  0.722 |  0.209 | 0.791 | 0.209 | 1.167 | WLS1     |  0.722 |
| ACC   |  0.715 |  0.254 | 0.850 | 0.150 | 1.249 | WLS1     |  0.715 |
| APP   |  0.656 |  0.317 | 0.844 | 0.156 | 1.442 | WLS1     |  0.656 |
| EFF   | -0.153 |  0.926 | 0.667 | 0.333 | 1.055 | WLS2     |  0.926 |
| VAL   | -0.084 |  0.899 | 0.702 | 0.298 | 1.018 | WLS2     |  0.899 |
| PRAC2 |  0.193 |  0.707 | 0.742 | 0.258 | 1.149 | WLS2     |  0.707 |
| PRAC1 |  0.289 |  0.670 | 0.822 | 0.178 | 1.359 | WLS2     |  0.670 |
| REL3  |  0.203 |  0.582 | 0.557 | 0.443 | 1.239 | WLS2     |  0.582 |
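The workflow's item-retention rules (primary loading, cross-loading, communality) can be screened against this table. The sketch below assumes the loadings table `tab` built above with columns named `WLS1`/`WLS2`; the cutoffs (.40 primary loading, .30 cross-loading, .40 communality) are illustrative conventions, not values fixed by this study.

```r
# Hedged sketch of the item-retention screen described in the analysis workflow.
# Cutoffs are illustrative; adjust to the study's actual criteria.
Lmat    <- abs(as.matrix(tab[, c("WLS1", "WLS2")]))
primary <- apply(Lmat, 1, max)                                        # largest |loading|
cross   <- apply(Lmat, 1, function(x) sort(x, decreasing = TRUE)[2])  # second largest
keep    <- primary >= .40 & cross <= .30 & tab[, "h2"] >= .40
data.frame(Item = rownames(tab), primary, cross, h2 = tab[, "h2"], keep)
```

For the loadings shown above, all ten items clear the primary-loading and communality cutoffs; the cross-loading rule is the binding one for items such as APP and PRAC1.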

Draws the factor diagram for the fitted EFA, showing the latent factors, item loadings, and residuals. Use it for a quick visual check of loading magnitudes and model structure.

fa.diagram(WTM_factor)

4. CONFIRMATORY FACTOR ANALYSIS (CFA)

4.1. Total Sample CFA

Runs a one-factor CFA on the same item set using a battery of estimators for ordinal and continuous assumptions, extracts standard fit indices, and compiles everything into a publication-ready table.

  • Data & model: selects the items and defines a single-factor model.
  • Estimators tested:
    • Ordinal case (ordered = names(giaf_df)): ULS, ULSM, ULSMV, DWLS, WLSM, WLSMV, WLSMVS.
    • Continuous case (no ordered argument): ML, MLM, MLR, MLMV, GLS.
  • Fitting function (ajustar_extraer):
    • Fits the CFA with the requested estimator and data type (ordinal vs. continuous).
    • Silences warnings and safely returns NULL on errors (so the loop continues).
    • Pulls key fit measures via fitMeasures: \(\chi^2\), df, p, CFI, TLI, NNFI, RMSEA (and 90% CI), SRMR.
    • Returns a tidy row with the Model, Type (ordinal/continuous), Estimator, and indices.
  • Output table: binds rows from all estimators, orders by Type then Estimator, rounds metrics, and renders a kable with a caption for easy comparison.

How to read the table: for each estimator, check CFI/TLI (higher is better), RMSEA/SRMR (lower is better), and the \(\chi^2\)/df/p for completeness. This lets you compare the model’s behavior under ordinal vs. continuous assumptions and across estimators commonly used in practice.
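As a quick screen, these reading rules can be expressed as a small helper. The cutoffs below (CFI/TLI ≥ .95, RMSEA ≤ .08, SRMR ≤ .08) are conventional illustrations, not criteria fixed by this study.

```r
# Hedged helper: flag whether a set of fit indices meets conventional
# (illustrative) cutoffs. Values are read off the rendered estimator table.
fit_ok <- function(cfi, tli, rmsea, srmr) {
  c(CFI = cfi >= .95, TLI = tli >= .95, RMSEA = rmsea <= .08, SRMR = srmr <= .08)
}
# e.g., screening the ordinal ULS row reported below:
fit_ok(cfi = 0.984, tli = 0.980, rmsea = 0.080, srmr = 0.072)
```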

WTM2 <- giaf_df

Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# IMPORTANT: specify only the ordinal item variables (avoid non-items like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Onefactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Onefactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "One Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (1-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 aligns
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (1-factor CFA): ordinal and continuous

| Model      | Type       | Estimator |     χ² | df |      p |   CFI |   TLI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|:-----------|:----------|-------:|---:|-------:|------:|------:|------:|:---------------|------:|
| One Factor | continuous | GLS       | 290.56 | 35 | <0.001 | 0.496 | 0.352 | 0.111 | [0.099, 0.122] | 0.094 |
| One Factor | continuous | ML        | 526.15 | 35 | <0.001 | 0.840 | 0.795 | 0.153 | [0.142, 0.165] | 0.070 |
| One Factor | continuous | MLM       | 526.15 | 35 | <0.001 | 0.840 | 0.795 | 0.153 | [0.142, 0.165] | 0.070 |
| One Factor | continuous | MLMV      | 526.15 | 35 | <0.001 | 0.840 | 0.795 | 0.153 | [0.142, 0.165] | 0.070 |
| One Factor | continuous | MLR       | 526.15 | 35 | <0.001 | 0.840 | 0.795 | 0.153 | [0.142, 0.165] | 0.070 |
| One Factor | ordinal    | DWLS      | 513.64 | 35 | <0.001 | 0.981 | 0.975 | 0.151 | [0.140, 0.163] | 0.084 |
| One Factor | ordinal    | ULS       | 168.46 | 35 |     NA | 0.984 | 0.980 | 0.080 | [0.068, 0.092] | 0.072 |
| One Factor | ordinal    | ULSM      | 168.46 | 35 |     NA | 0.984 | 0.980 | 0.080 | [0.068, 0.092] | 0.072 |
| One Factor | ordinal    | ULSMV     | 168.46 | 35 |     NA | 0.984 | 0.980 | 0.080 | [0.068, 0.092] | 0.072 |
| One Factor | ordinal    | WLSM      | 513.64 | 35 | <0.001 | 0.981 | 0.975 | 0.151 | [0.140, 0.163] | 0.084 |
| One Factor | ordinal    | WLSMV     | 513.64 | 35 | <0.001 | 0.981 | 0.975 | 0.151 | [0.140, 0.163] | 0.084 |
| One Factor | ordinal    | WLSMVS    | 513.64 | 35 | <0.001 | 0.981 | 0.975 | 0.151 | [0.140, 0.163] | 0.084 |
WTM2 <- giaf_df

#One factor solution
Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

#CFA execution
CFAone <- cfa(Onefactor, orthogonal = FALSE, data = WTM2,
              estimator = "ULS", ordered = names(WTM2))
resumen_CFAone <- fitMeasures(CFAone)
resumen_CFAone
##                  npar                  fmin                 chisq 
##                69.000                 0.141               168.456 
##                    df                pvalue        baseline.chisq 
##                35.000                    NA              8635.770 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.984 
##                   tli                  nnfi                   rfi 
##                 0.980                 0.980                 0.975 
##                   nfi                  pnfi                   ifi 
##                 0.980                 0.763                 0.984 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.984                 0.080                 0.068 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.092                 0.900                 0.000 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.508                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.050                 0.072                 0.072 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.050                 0.072                 0.052 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.079                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               177.791               204.558                 0.999 
##                  agfi                  pgfi                   mfi 
##                 0.996                 0.336                 0.894 
##                  wrmr 
##                 1.273

Draws a path diagram of the fitted CFA model (CFAone) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factor, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAone, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFAone (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the one-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFAone' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFAone[nm])

# Build table
tab_cfa <- tibble(
  Model = "One Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (One Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (One Factor)

| Model      |     χ² | df |  p |   CFI |  TLI | NNFI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|-------:|---:|---:|------:|-----:|-----:|------:|:---------------|------:|
| One Factor | 168.46 | 35 | NA | 0.984 | 0.98 | 0.98 |  0.08 | [0.068, 0.092] | 0.072 |
# --- 2-factor CFA: testing estimators for ordinal and continuous data ---
WTM2 <- giaf_df

Twofactor <- 'GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  # Fit the model, silencing warnings
  fit <- tryCatch(
    {
      suppressWarnings(
        if (tipo == "ordinal") {
          cfa(Twofactor, data = WTM2, estimator = est, ordered = names(WTM2))
        } else {
          cfa(Twofactor, data = WTM2, estimator = est)
        }
      )
    },
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  # fitMeasures may not return some indices (e.g., SRMR for certain estimators)
  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "Two Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal")    |> compact() |> bind_rows()
tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |> compact() |> bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |> arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (2-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 aligns
) |>
  kable_styling(full_width = FALSE, position = "center",
                bootstrap_options = c("striped","hover"))
Fit indices by estimator (2-factor CFA): ordinal and continuous

| Model      | Type       | Estimator |     χ² | df |      p |   CFI |   TLI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|:-----------|:----------|-------:|---:|-------:|------:|------:|------:|:---------------|------:|
| Two Factor | continuous | GLS       | 234.02 | 34 | <0.001 | 0.605 | 0.478 | 0.099 | [0.087, 0.111] | 0.080 |
| Two Factor | continuous | ML        | 370.97 | 34 | <0.001 | 0.891 | 0.855 | 0.129 | [0.117, 0.141] | 0.059 |
| Two Factor | continuous | MLM       | 370.97 | 34 | <0.001 | 0.891 | 0.855 | 0.129 | [0.117, 0.141] | 0.059 |
| Two Factor | continuous | MLMV      | 370.97 | 34 | <0.001 | 0.891 | 0.855 | 0.129 | [0.117, 0.141] | 0.059 |
| Two Factor | continuous | MLR       | 370.97 | 34 | <0.001 | 0.891 | 0.855 | 0.129 | [0.117, 0.141] | 0.059 |
| Two Factor | ordinal    | DWLS      | 325.49 | 34 | <0.001 | 0.988 | 0.985 | 0.120 | [0.108, 0.132] | 0.065 |
| Two Factor | ordinal    | ULS       | 101.93 | 34 |     NA | 0.992 | 0.990 | 0.058 | [0.045, 0.071] | 0.056 |
| Two Factor | ordinal    | ULSM      | 101.93 | 34 |     NA | 0.992 | 0.990 | 0.058 | [0.045, 0.071] | 0.056 |
| Two Factor | ordinal    | ULSMV     | 101.93 | 34 |     NA | 0.992 | 0.990 | 0.058 | [0.045, 0.071] | 0.056 |
| Two Factor | ordinal    | WLSM      | 325.49 | 34 | <0.001 | 0.988 | 0.985 | 0.120 | [0.108, 0.132] | 0.065 |
| Two Factor | ordinal    | WLSMV     | 325.49 | 34 | <0.001 | 0.988 | 0.985 | 0.120 | [0.108, 0.132] | 0.065 |
| Two Factor | ordinal    | WLSMVS    | 325.49 | 34 | <0.001 | 0.988 | 0.985 | 0.120 | [0.108, 0.132] | 0.065 |

(The Model column is labeled "Two Factor" here; the code above originally hard-coded "One Factor" although the fitted model is the two-factor solution, as the 34 degrees of freedom confirm.)

Specifies a two-factor model (GIAF: REL1–REL4, ACC, APP; GIAF2: PRAC1, PRAC2, EFF, VAL) and fits a CFA with the ULS estimator while treating items as ordinal (ordered = names(WTM2)). The object resumen_CFtwo stores the key fit measures for later reporting.

WTM2 <- giaf_df

#Two factor solution
Twofactor <- 'GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL'

#CFA execution
CFAtwo <- cfa(Twofactor, orthogonal = FALSE, data = WTM2,
              estimator = "ULS", ordered = names(WTM2))
resumen_CFtwo <- fitMeasures(CFAtwo)
resumen_CFtwo
##                  npar                  fmin                 chisq 
##                70.000                 0.085               101.930 
##                    df                pvalue        baseline.chisq 
##                34.000                    NA              8635.770 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.992 
##                   tli                  nnfi                   rfi 
##                 0.990                 0.990                 0.984 
##                   nfi                  pnfi                   ifi 
##                 0.988                 0.747                 0.992 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.992                 0.058                 0.045 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.071                 0.900                 0.150 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.002                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.039                 0.056                 0.056 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.039                 0.056                 0.040 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.062                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               286.138               329.895                 0.999 
##                  agfi                  pgfi                   mfi 
##                 0.998                 0.327                 0.945 
##                  wrmr 
##                 0.990

Draws a path diagram of the fitted CFA model (CFAtwo) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factors, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAtwo, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFtwo (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the two-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFtwo' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFtwo[nm])

# Build table
tab_cfa <- tibble(
  Model = "Two Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (Two Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (Two Factor)

| Model      |     χ² | df |  p |   CFI |  TLI | NNFI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|-------:|---:|---:|------:|-----:|-----:|------:|:---------------|------:|
| Two Factor | 101.93 | 34 | NA | 0.992 | 0.99 | 0.99 | 0.058 | [0.045, 0.071] | 0.056 |
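The tables above report the one- and two-factor solutions side by side but do not formally compare them. A sketch of a nested-model test follows, assuming Onefactor, Twofactor, WTM2, and items are defined as above; it refits with ULSMV because plain ULS reports no p-value (see the NA entries), and it assumes the installed lavaan supports scaled difference tests for that estimator.

```r
# Hedged sketch: scaled chi-square difference test for the nested
# one- vs two-factor models, refit with ULSMV to obtain a test statistic.
library(lavaan)
fit1 <- cfa(Onefactor, data = WTM2, estimator = "ULSMV", ordered = items)
fit2 <- cfa(Twofactor, data = WTM2, estimator = "ULSMV", ordered = items)
lavTestLRT(fit1, fit2)  # a significant difference favors the less
                        # restrictive two-factor model
```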

4.2. DRA Sample CFA

Runs a one-factor CFA on the same item set using a battery of estimators for ordinal and continuous assumptions, extracts standard fit indices, and compiles everything into a publication-ready table.

  • Data & model: selects the items and defines a single-factor model.
  • Estimators tested:
    • Ordinal case (ordered = names(giaf_df)): ULS, ULSM, ULSMV, DWLS, WLSM, WLSMV, WLSMVS.
    • Continuous case (no ordered argument): ML, MLM, MLR, MLMV, GLS.
  • Fitting function (ajustar_extraer):
    • Fits the CFA with the requested estimator and data type (ordinal vs. continuous).
    • Silences warnings and safely returns NULL on errors (so the loop continues).
    • Pulls key fit measures via fitMeasures: \(\chi^2\), df, p, CFI, TLI, NNFI, RMSEA (and 90% CI), SRMR.
    • Returns a tidy row with the Model, Type (ordinal/continuous), Estimator, and indices.
  • Output table: binds rows from all estimators, orders by Type then Estimator, rounds metrics, and renders a kable with a caption for easy comparison.

How to read the table: for each estimator, check CFI/TLI (higher is better), RMSEA/SRMR (lower is better), and the \(\chi^2\)/df/p for completeness. This lets you compare the model’s behavior under ordinal vs. continuous assumptions and across estimators commonly used in practice.

WTM2 <- DRA_df

Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# IMPORTANT: specify only the ordinal item variables (avoid non-items like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Onefactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Onefactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "One Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (1-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 aligns
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (1-factor CFA): ordinal and continuous

| Model      | Type       | Estimator |     χ² | df |      p |   CFI |   TLI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|:-----------|:----------|-------:|---:|-------:|------:|------:|------:|:---------------|------:|
| One Factor | continuous | GLS       | 149.75 | 35 | <0.001 | 0.453 | 0.297 | 0.117 | [0.098, 0.136] | 0.099 |
| One Factor | continuous | ML        | 225.55 | 35 | <0.001 | 0.838 | 0.791 | 0.150 | [0.132, 0.169] | 0.073 |
| One Factor | continuous | MLM       | 225.55 | 35 | <0.001 | 0.838 | 0.791 | 0.150 | [0.132, 0.169] | 0.073 |
| One Factor | continuous | MLMV      | 225.55 | 35 | <0.001 | 0.838 | 0.791 | 0.150 | [0.132, 0.169] | 0.073 |
| One Factor | continuous | MLR       | 225.55 | 35 | <0.001 | 0.838 | 0.791 | 0.150 | [0.132, 0.169] | 0.073 |
| One Factor | ordinal    | DWLS      | 188.05 | 35 | <0.001 | 0.988 | 0.984 | 0.135 | [0.116, 0.154] | 0.079 |
| One Factor | ordinal    | ULS       |  69.53 | 35 |     NA | 0.990 | 0.987 | 0.064 | [0.042, 0.086] | 0.073 |
| One Factor | ordinal    | ULSM      |  69.53 | 35 |     NA | 0.990 | 0.987 | 0.064 | [0.042, 0.086] | 0.073 |
| One Factor | ordinal    | ULSMV     |  69.53 | 35 |     NA | 0.990 | 0.987 | 0.064 | [0.042, 0.086] | 0.073 |
| One Factor | ordinal    | WLSM      | 188.05 | 35 | <0.001 | 0.988 | 0.984 | 0.135 | [0.116, 0.154] | 0.079 |
| One Factor | ordinal    | WLSMV     | 188.05 | 35 | <0.001 | 0.988 | 0.984 | 0.135 | [0.116, 0.154] | 0.079 |
| One Factor | ordinal    | WLSMVS    | 188.05 | 35 | <0.001 | 0.988 | 0.984 | 0.135 | [0.116, 0.154] | 0.079 |
WTM2 <- DRA_df

#One factor solution
Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

#CFA execution
CFAone <- cfa(Onefactor, orthogonal = FALSE, data = WTM2,
              estimator = "ULS", ordered = names(WTM2))
resumen_CFAone <- fitMeasures(CFAone)
resumen_CFAone
##                  npar                  fmin                 chisq 
##                65.000                 0.144                69.530 
##                    df                pvalue        baseline.chisq 
##                35.000                    NA              3559.575 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.990 
##                   tli                  nnfi                   rfi 
##                 0.987                 0.987                 0.975 
##                   nfi                  pnfi                   ifi 
##                 0.980                 0.763                 0.990 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.990                 0.064                 0.042 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.086                 0.900                 0.139 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.123                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.051                 0.073                 0.073 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.051                 0.073                 0.054 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.080                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               172.904               198.931                 0.998 
##                  agfi                  pgfi                   mfi 
##                 0.995                 0.349                 0.931 
##                  wrmr 
##                 0.834

Draws a path diagram of the fitted CFA model (CFAone) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factor, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAone, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFAone (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the one-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFAone' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFAone[nm])

# Build table
tab_cfa <- tibble(
  Model = "One Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (One Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (One Factor)

| Model      |    χ² | df |  p |  CFI |   TLI |  NNFI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|------:|---:|---:|-----:|------:|------:|------:|:---------------|------:|
| One Factor | 69.53 | 35 | NA | 0.99 | 0.987 | 0.987 | 0.064 | [0.042, 0.086] | 0.073 |
WTM2 <- DRA_df

Twofactor <- '
  GIAF  =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
  GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# Specify only the ordinal item variables (avoid non-item columns like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  # Fit model (suppress warnings; return NULL on errors)
  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Twofactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Twofactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  # Some estimators may not return all indices; fill missing with NA
  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "Two Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (2-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 align entries
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (2-factor CFA): ordinal and continuous

| Model      | Type       | Estimator |     χ² | df |      p |   CFI |   TLI | RMSEA | RMSEA 90% CI   |  SRMR |
|:-----------|:-----------|:----------|-------:|---:|-------:|------:|------:|------:|:---------------|------:|
| Two Factor | continuous | GLS       | 125.10 | 34 | <0.001 | 0.566 | 0.425 | 0.106 | [0.086, 0.126] | 0.085 |
| Two Factor | continuous | ML        | 179.92 | 34 | <0.001 | 0.876 | 0.836 | 0.133 | [0.115, 0.153] | 0.065 |
| Two Factor | continuous | MLM       | 179.92 | 34 | <0.001 | 0.876 | 0.836 | 0.133 | [0.115, 0.153] | 0.065 |
| Two Factor | continuous | MLMV      | 179.92 | 34 | <0.001 | 0.876 | 0.836 | 0.133 | [0.115, 0.153] | 0.065 |
| Two Factor | continuous | MLR       | 179.92 | 34 | <0.001 | 0.876 | 0.836 | 0.133 | [0.115, 0.153] | 0.065 |
| Two Factor | ordinal    | DWLS      | 125.77 | 34 | <0.001 | 0.993 | 0.990 | 0.106 | [0.087, 0.126] | 0.062 |
| Two Factor | ordinal    | ULS       |  45.39 | 34 |     NA | 0.997 | 0.996 | 0.037 | [0.000, 0.064] | 0.059 |
| Two Factor | ordinal    | ULSM      |  45.39 | 34 |     NA | 0.997 | 0.996 | 0.037 | [0.000, 0.064] | 0.059 |
| Two Factor | ordinal    | ULSMV     |  45.39 | 34 |     NA | 0.997 | 0.996 | 0.037 | [0.000, 0.064] | 0.059 |
| Two Factor | ordinal    | WLSM      | 125.77 | 34 | <0.001 | 0.993 | 0.990 | 0.106 | [0.087, 0.126] | 0.062 |
| Two Factor | ordinal    | WLSMV     | 125.77 | 34 | <0.001 | 0.993 | 0.990 | 0.106 | [0.087, 0.126] | 0.062 |
| Two Factor | ordinal    | WLSMVS    | 125.77 | 34 | <0.001 | 0.993 | 0.990 | 0.106 | [0.087, 0.126] | 0.062 |

Specifies a two-factor model (GIAF: REL1–REL4, ACC, APP; GIAF2: PRAC1, PRAC2, EFF, VAL) for the DRA sample and fits a CFA with the ULS estimator while treating items as ordinal (ordered = names(WTM2)). The object resumen_CFtwo stores the key fit measures for later reporting.

WTM2 <- DRA_df

#Two factor solution
Twofactor <- 'GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL'

#CFA execution
CFAtwo <- cfa(Twofactor, orthogonal = FALSE, data = WTM2,
              estimator = "ULS", ordered = names(WTM2))
resumen_CFtwo <- fitMeasures(CFAtwo)
resumen_CFtwo
##                  npar                  fmin                 chisq 
##                66.000                 0.094                45.385 
##                    df                pvalue        baseline.chisq 
##                34.000                    NA              3559.575 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.997 
##                   tli                  nnfi                   rfi 
##                 0.996                 0.996                 0.983 
##                   nfi                  pnfi                   ifi 
##                 0.987                 0.746                 0.997 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.997                 0.037                 0.000 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.064                 0.900                 0.760 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.002                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.041                 0.059                 0.059 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.041                 0.059                 0.043 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.065                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               258.012               297.454                 0.999 
##                  agfi                  pgfi                   mfi 
##                 0.997                 0.340                 0.977 
##                  wrmr 
##                 0.674

Draws a path diagram of the fitted CFA model (CFAtwo) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factors, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAtwo, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFtwo (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the two-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFtwo' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFtwo[nm])

# Build table
tab_cfa <- tibble(
  Model = "Two Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (Two Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (Two Factor)
Model χ² df p CFI TLI NNFI RMSEA RMSEA 90% CI SRMR
Two Factor 45.39 34 NA 0.997 0.996 0.996 0.037 [0.000, 0.064] 0.059

4.3. Diskussionsforum Sample CFA

Runs a one-factor CFA on the same item set using a battery of estimators for ordinal and continuous assumptions, extracts standard fit indices, and compiles everything into a publication-ready table.

  • Data & model: selects the items and defines a single-factor model.
  • Estimators tested:
    • Ordinal case (ordered = items): ULS, ULSM, ULSMV, DWLS, WLSM, WLSMV, WLSMVS.
    • Continuous case (no ordered argument): ML, MLM, MLR, MLMV, GLS.
  • Fitting function (ajustar_extraer):
    • Fits the CFA with the requested estimator and data type (ordinal vs. continuous).
    • Silences warnings and safely returns NULL on errors (so the loop continues).
    • Pulls key fit measures via fitMeasures: \(\chi^2\), df, p, CFI, TLI, RMSEA (and 90% CI), SRMR.
    • Returns a tidy row with the Model, Type (ordinal/continuous), Estimator, and indices.
  • Output table: binds rows from all estimators, orders by Type then Estimator, rounds metrics, and renders a kable with a caption for easy comparison.

How to read the table: for each estimator, check CFI/TLI (higher is better), RMSEA/SRMR (lower is better), and the \(\chi^2\)/df/p for completeness. This lets you compare the model’s behavior under ordinal vs. continuous assumptions and across estimators commonly used in practice.
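
As a quick sanity check, those conventional benchmarks can be applied programmatically. The helper below is a hypothetical sketch (not part of the analysis pipeline), using the commonly cited Hu and Bentler (1999) cutoffs of CFI/TLI ≥ .95, RMSEA ≤ .06, and SRMR ≤ .08:

```r
# Hypothetical helper: flag whether a row of fit indices meets the
# conventional Hu & Bentler (1999) benchmarks. Illustration only.
flag_fit <- function(cfi, tli, rmsea, srmr,
                     cfi_min = 0.95, rmsea_max = 0.06, srmr_max = 0.08) {
  c(CFI_ok   = cfi   >= cfi_min,
    TLI_ok   = tli   >= cfi_min,
    RMSEA_ok = rmsea <= rmsea_max,
    SRMR_ok  = srmr  <= srmr_max)
}

# e.g., the ULS (ordinal) row in the table below passes all four checks
flag_fit(cfi = 0.997, tli = 0.996, rmsea = 0.037, srmr = 0.058)
```

The cutoffs are arguments, so stricter or more lenient conventions can be swapped in without touching the logic.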

WTM2 <- Diskussionsforum_df

Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# IMPORTANT: specify only the ordinal item variables (avoid non-items like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Onefactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Onefactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "One Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (1-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 aligns
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (1-factor CFA): ordinal and continuous
Model Type Estimator χ² df p CFI TLI RMSEA RMSEA 90% CI SRMR
One Factor continuous GLS 108.13 35 <0.001 0.577 0.457 0.091 [0.072, 0.111] 0.072
One Factor continuous ML 167.42 35 <0.001 0.897 0.867 0.122 [0.104, 0.141] 0.054
One Factor continuous MLM 167.42 35 <0.001 0.897 0.867 0.122 [0.104, 0.141] 0.054
One Factor continuous MLMV 167.42 35 <0.001 0.897 0.867 0.122 [0.104, 0.141] 0.054
One Factor continuous MLR 167.42 35 <0.001 0.897 0.867 0.122 [0.104, 0.141] 0.054
One Factor ordinal DWLS 155.06 35 <0.001 0.988 0.984 0.117 [0.098, 0.136] 0.066
One Factor ordinal ULS 47.05 35 NA 0.997 0.996 0.037 [0.000, 0.062] 0.058
One Factor ordinal ULSM 47.05 35 NA 0.997 0.996 0.037 [0.000, 0.062] 0.058
One Factor ordinal ULSMV 47.05 35 NA 0.997 0.996 0.037 [0.000, 0.062] 0.058
One Factor ordinal WLSM 155.06 35 <0.001 0.988 0.984 0.117 [0.098, 0.136] 0.066
One Factor ordinal WLSMV 155.06 35 <0.001 0.988 0.984 0.117 [0.098, 0.136] 0.066
One Factor ordinal WLSMVS 155.06 35 <0.001 0.988 0.984 0.117 [0.098, 0.136] 0.066
WTM2 <- Diskussionsforum_df

#One factor solution
Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

#CFA execution
CFAone <- cfa(Onefactor, orthogonal = FALSE, data = WTM2, estimator = "ULS",
              ordered = names(WTM2))
resumen_CFAone <- fitMeasures(CFAone)
resumen_CFAone
##                  npar                  fmin                 chisq 
##                56.000                 0.093                47.047 
##                    df                pvalue        baseline.chisq 
##                35.000                    NA              3774.595 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.997 
##                   tli                  nnfi                   rfi 
##                 0.996                 0.996                 0.984 
##                   nfi                  pnfi                   ifi 
##                 0.988                 0.768                 0.997 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.997                 0.037                 0.000 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.062                 0.900                 0.780 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.001                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.043                 0.058                 0.058 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.043                 0.058                 0.045 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.064                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               267.758               308.146                 0.999 
##                  agfi                  pgfi                   mfi 
##                 0.996                 0.384                 0.976 
##                  wrmr 
##                 0.719
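
The df values in the output above can be sanity-checked by hand. In the delta parameterization the thresholds appear in both the sample moments and the free parameters, so they cancel out of the tally and only the correlation structure matters; a minimal sketch:

```r
# df check for the ordinal CFA: with p = 10 items the polychoric matrix
# supplies p*(p-1)/2 = 45 unique correlations; thresholds cancel out.
p <- 10
n_corr <- p * (p - 1) / 2      # 45 unique polychoric correlations
df_one <- n_corr - p           # one factor: 10 loadings -> 35
df_two <- n_corr - (p + 1)     # two factors: 10 loadings + 1 factor corr -> 34
c(df_one = df_one, df_two = df_two)
```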

Draws a path diagram of the fitted CFA model (CFAone) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factor, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAone, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFAone (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the one-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFAone' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFAone[nm])

# Build table
tab_cfa <- tibble(
  Model = "One Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (One Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (One Factor)
Model χ² df p CFI TLI NNFI RMSEA RMSEA 90% CI SRMR
One Factor 47.05 35 NA 0.997 0.996 0.996 0.037 [0.000, 0.062] 0.058
WTM2 <- Diskussionsforum_df

Twofactor <- '
  GIAF  =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
  GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# Specify only the ordinal item variables (avoid non-item columns like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  # Fit model (suppress warnings; return NULL on errors)
  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Twofactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Twofactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  # Some estimators may not return all indices; fill missing with NA
  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "Two Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (2-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 align entries
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (2-factor CFA): ordinal and continuous
Model Type Estimator χ² df p CFI TLI RMSEA RMSEA 90% CI SRMR
Two Factor continuous GLS 92.04 34 <0.001 0.665 0.556 0.082 [0.062, 0.103] 0.073
Two Factor continuous ML 133.44 34 <0.001 0.922 0.897 0.108 [0.089, 0.127] 0.049
Two Factor continuous MLM 133.44 34 <0.001 0.922 0.897 0.108 [0.089, 0.127] 0.049
Two Factor continuous MLMV 133.44 34 <0.001 0.922 0.897 0.108 [0.089, 0.127] 0.049
Two Factor continuous MLR 133.44 34 <0.001 0.922 0.897 0.108 [0.089, 0.127] 0.049
Two Factor ordinal DWLS 114.64 34 <0.001 0.992 0.989 0.097 [0.078, 0.117] 0.057
Two Factor ordinal ULS 34.64 34 NA 1.000 1.000 0.009 [0.000, 0.047] 0.050
Two Factor ordinal ULSM 34.64 34 NA 1.000 1.000 0.009 [0.000, 0.047] 0.050
Two Factor ordinal ULSMV 34.64 34 NA 1.000 1.000 0.009 [0.000, 0.047] 0.050
Two Factor ordinal WLSM 114.64 34 <0.001 0.992 0.989 0.097 [0.078, 0.117] 0.057
Two Factor ordinal WLSMV 114.64 34 <0.001 0.992 0.989 0.097 [0.078, 0.117] 0.057
Two Factor ordinal WLSMVS 114.64 34 <0.001 0.992 0.989 0.097 [0.078, 0.117] 0.057

Specifies a two-factor model (GIAF loading on REL1–REL4, ACC, and APP; GIAF2 on PRAC1, PRAC2, EFF, and VAL) and fits a CFA with the ULS estimator while treating items as ordinal (ordered = names(WTM2)). The object resumen_CFtwo stores the key fit measures for later reporting.

WTM2 <- Diskussionsforum_df

#Two factor solution
Twofactor <- 'GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL'

#CFA execution
CFAtwo <- cfa(Twofactor, orthogonal = FALSE, data = WTM2, estimator = "ULS",
              ordered = names(WTM2))
resumen_CFtwo <- fitMeasures(CFAtwo)
resumen_CFtwo
##                  npar                  fmin                 chisq 
##                57.000                 0.068                34.644 
##                    df                pvalue        baseline.chisq 
##                34.000                    NA              3774.595 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 1.000 
##                   tli                  nnfi                   rfi 
##                 1.000                 1.000                 0.988 
##                   nfi                  pnfi                   ifi 
##                 0.991                 0.749                 1.000 
##                   rni                 rmsea        rmsea.ci.lower 
##                 1.000                 0.009                 0.000 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.047                 0.900                 0.968 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.000                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.037                 0.050                 0.050 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.037                 0.050                 0.039 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.055                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               354.529               408.782                 0.999 
##                  agfi                  pgfi                   mfi 
##                 0.997                 0.373                 0.999 
##                  wrmr 
##                 0.617

Draws a path diagram of the fitted CFA model (CFAtwo) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factors, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAtwo, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFtwo (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the two-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFtwo' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFtwo[nm])

# Build table
tab_cfa <- tibble(
  Model = "Two Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (Two Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (Two Factor)
Model χ² df p CFI TLI NNFI RMSEA RMSEA 90% CI SRMR
Two Factor 34.64 34 NA 1 1 1 0.009 [0.000, 0.047] 0.05
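
Under the ULS (ordinal) estimator the two-factor solution fits this sample better than the one-factor solution (CFI 1.000 vs 0.997, RMSEA 0.009 vs 0.037). A minimal sketch of the ΔCFI check often used for such nested comparisons (|ΔCFI| ≥ .01 as a flag is a common, though debated, convention):

```r
# Change in CFI between nested models; values are the ULS (ordinal)
# results reported above for this sample.
delta_cfi <- function(cfi_restricted, cfi_less_restricted) {
  cfi_less_restricted - cfi_restricted
}
delta_cfi(0.997, 1.000)  # about 0.003, below the common .01 flag
```

For a formal nested test one could also pass both fitted objects to lavaan's lavTestLRT(), keeping in mind that the ULS discrepancy function is not ML-based.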

4.4. CMHA Sample CFA

Runs a one-factor CFA on the same item set using a battery of estimators for ordinal and continuous assumptions, extracts standard fit indices, and compiles everything into a publication-ready table.

  • Data & model: selects the items and defines a single-factor model.
  • Estimators tested:
    • Ordinal case (ordered = items): ULS, ULSM, ULSMV, DWLS, WLSM, WLSMV, WLSMVS.
    • Continuous case (no ordered argument): ML, MLM, MLR, MLMV, GLS.
  • Fitting function (ajustar_extraer):
    • Fits the CFA with the requested estimator and data type (ordinal vs. continuous).
    • Silences warnings and safely returns NULL on errors (so the loop continues).
    • Pulls key fit measures via fitMeasures: \(\chi^2\), df, p, CFI, TLI, RMSEA (and 90% CI), SRMR.
    • Returns a tidy row with the Model, Type (ordinal/continuous), Estimator, and indices.
  • Output table: binds rows from all estimators, orders by Type then Estimator, rounds metrics, and renders a kable with a caption for easy comparison.

How to read the table: for each estimator, check CFI/TLI (higher is better), RMSEA/SRMR (lower is better), and the \(\chi^2\)/df/p for completeness. This lets you compare the model’s behavior under ordinal vs. continuous assumptions and across estimators commonly used in practice.

WTM2 <- CMHA_df

Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# IMPORTANT: specify only the ordinal item variables (avoid non-items like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Onefactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Onefactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "One Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (1-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 aligns
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (1-factor CFA): ordinal and continuous
Model Type Estimator χ² df p CFI TLI RMSEA RMSEA 90% CI SRMR
One Factor continuous GLS 103.79 35 <0.001 0.364 0.182 0.137 [0.107, 0.168] 0.118
One Factor continuous ML 189.94 35 <0.001 0.832 0.784 0.205 [0.177, 0.234] 0.076
One Factor continuous MLM 189.94 35 <0.001 0.832 0.784 0.205 [0.177, 0.234] 0.076
One Factor continuous MLMV 189.94 35 <0.001 0.832 0.784 0.205 [0.177, 0.234] 0.076
One Factor continuous MLR 189.94 35 <0.001 0.832 0.784 0.205 [0.177, 0.234] 0.076
One Factor ordinal DWLS 166.61 35 <0.001 0.986 0.983 0.190 [0.162, 0.220] 0.098
One Factor ordinal ULS 36.54 35 NA 0.999 0.999 0.021 [0.000, 0.074] 0.080
One Factor ordinal ULSM 36.54 35 NA 0.999 0.999 0.021 [0.000, 0.074] 0.080
One Factor ordinal ULSMV 36.54 35 NA 0.999 0.999 0.021 [0.000, 0.074] 0.080
One Factor ordinal WLSM 166.61 35 <0.001 0.986 0.983 0.190 [0.162, 0.220] 0.098
One Factor ordinal WLSMV 166.61 35 <0.001 0.986 0.983 0.190 [0.162, 0.220] 0.098
One Factor ordinal WLSMVS 166.61 35 <0.001 0.986 0.983 0.190 [0.162, 0.220] 0.098
WTM2 <- CMHA_df

#One factor solution
Onefactor <- '
  GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP + PRAC1 + PRAC2 + EFF + VAL
'

#CFA execution
CFAone <- cfa(Onefactor, orthogonal = FALSE, data = WTM2, estimator = "ULS",
              ordered = names(WTM2))
resumen_CFAone <- fitMeasures(CFAone)
resumen_CFAone
##                  npar                  fmin                 chisq 
##                53.000                 0.174                36.542 
##                    df                pvalue        baseline.chisq 
##                35.000                    NA              2215.948 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 0.999 
##                   tli                  nnfi                   rfi 
##                 0.999                 0.999                 0.979 
##                   nfi                  pnfi                   ifi 
##                 0.984                 0.765                 0.999 
##                   rni                 rmsea        rmsea.ci.lower 
##                 0.999                 0.021                 0.000 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.074                 0.900                 0.758 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.030                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.060                 0.080                 0.080 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.060                 0.080                 0.063 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.088                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               142.738               164.197                 0.997 
##                  agfi                  pgfi                   mfi 
##                 0.993                 0.397                 0.993 
##                  wrmr 
##                 0.644

Draws a path diagram of the fitted CFA model (CFAone) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factor, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAone, what = "std", layout = "circle2", intercepts = FALSE,
         edge.label.cex = 1, optimizeLatRes = TRUE, groups = "lat",
         pastel = TRUE, exoVar = FALSE, sizeInt = 3, edge.color = "black",
         esize = 3, label.prop = 1, sizeLat = 8)

Extracts the main fit indices from resumen_CFAone (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the one-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFAone' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFAone[nm])

# Build table
tab_cfa <- tibble(
  Model = "One Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (One Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (One Factor)
Model χ² df p CFI TLI NNFI RMSEA RMSEA 90% CI SRMR
One Factor 36.54 35 NA 0.999 0.999 0.999 0.021 [0.000, 0.074] 0.08
WTM2 <- CMHA_df

Twofactor <- '
  GIAF  =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
  GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL
'

estimadores_ordinal   <- c("ULS","ULSM","ULSMV","DWLS","WLSM","WLSMV","WLSMVS")
estimadores_continuos <- c("ML","MLM","MLR","MLMV","GLS")

# Specify only the ordinal item variables (avoid non-item columns like "project")
items <- c("REL1","REL2","REL3","REL4","ACC","APP","PRAC1","PRAC2","EFF","VAL")

ajustar_extraer <- function(est, tipo = c("ordinal","continuous")) {
  tipo <- match.arg(tipo)

  # Fit model (suppress warnings; return NULL on errors)
  fit <- tryCatch(
    suppressWarnings({
      if (tipo == "ordinal") {
        cfa(Twofactor, data = WTM2, estimator = est, ordered = items)
      } else {
        cfa(Twofactor, data = WTM2, estimator = est)
      }
    }),
    error = function(e) NULL
  )

  if (is.null(fit)) return(NULL)

  nm <- c("chisq","df","pvalue","cfi","tli","rmsea",
          "rmsea.ci.lower","rmsea.ci.upper","srmr")

  # Some estimators may not return all indices; fill missing with NA
  fm <- tryCatch(
    fitMeasures(fit, fit.measures = nm),
    error = function(e) setNames(rep(NA_real_, length(nm)), nm)
  )

  tibble(
    Model = "Two Factor",
    Type = tipo,
    Estimator = est,
    `χ²`  = round(as.numeric(fm["chisq"]), 2),
    df    = as.integer(fm["df"]),
    p     = format.pval(as.numeric(fm["pvalue"]), digits = 3, eps = 1e-3),
    CFI   = round(as.numeric(fm["cfi"]), 3),
    TLI   = round(as.numeric(fm["tli"]), 3),
    RMSEA = round(as.numeric(fm["rmsea"]), 3),
    `RMSEA 90% CI` = paste0("[",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.lower"])), ", ",
                            sprintf("%.3f", as.numeric(fm["rmsea.ci.upper"])), "]"),
    SRMR  = round(as.numeric(fm["srmr"]), 3)
  )
}

tab_ord <- map(estimadores_ordinal, ajustar_extraer, tipo = "ordinal") |>
  compact() |>
  bind_rows()

tab_con <- map(estimadores_continuos, ajustar_extraer, tipo = "continuous") |>
  compact() |>
  bind_rows()

tab_cfa <- bind_rows(tab_ord, tab_con) |>
  arrange(Type, Estimator)

kable(
  tab_cfa,
  caption = "Fit indices by estimator (2-factor CFA): ordinal and continuous",
  align = c("l","l","l","r","r","r","r","r","l","r")  # 10 columns => 10 align entries
) |>
  kable_styling(
    full_width = FALSE,
    position = "center",
    bootstrap_options = c("striped","hover")
  )
Fit indices by estimator (2-factor CFA): ordinal and continuous
Model Type Estimator χ² df p CFI TLI RMSEA RMSEA 90% CI SRMR
Two Factor continuous GLS 98.25 34 <0.001 0.406 0.214 0.135 [0.104, 0.166] 0.122
Two Factor continuous ML 163.49 34 <0.001 0.860 0.814 0.190 [0.162, 0.220] 0.069
Two Factor continuous MLM 163.49 34 <0.001 0.860 0.814 0.190 [0.162, 0.220] 0.069
Two Factor continuous MLMV 163.49 34 <0.001 0.860 0.814 0.190 [0.162, 0.220] 0.069
Two Factor continuous MLR 163.49 34 <0.001 0.860 0.814 0.190 [0.162, 0.220] 0.069
Two Factor ordinal DWLS 127.03 34 <0.001 0.990 0.987 0.162 [0.133, 0.193] 0.084
Two Factor ordinal ULS 28.43 34 NA 1.000 1.003 0.000 [0.000, 0.053] 0.070
Two Factor ordinal ULSM 28.43 34 NA 1.000 1.003 0.000 [0.000, 0.053] 0.070
Two Factor ordinal ULSMV 28.43 34 NA 1.000 1.003 0.000 [0.000, 0.053] 0.070
Two Factor ordinal WLSM 127.03 34 <0.001 0.990 0.987 0.162 [0.133, 0.193] 0.084
Two Factor ordinal WLSMV 127.03 34 <0.001 0.990 0.987 0.162 [0.133, 0.193] 0.084
Two Factor ordinal WLSMVS 127.03 34 <0.001 0.990 0.987 0.162 [0.133, 0.193] 0.084
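
A note on the TLI of 1.003 in the ULS rows: the TLI/NNFI is non-normed and can exceed 1 whenever the model χ² falls below its df. Recomputing it from the baseline and model values shown in the fitMeasures output further below reproduces the table entry:

```r
# TLI = (chi2_b/df_b - chi2_m/df_m) / (chi2_b/df_b - 1); baseline
# (2215.948, df 45) and model (28.428, df 34) are the ULS values, and
# chi2_m < df_m pushes the ratio above 1.
tli <- (2215.948/45 - 28.428/34) / (2215.948/45 - 1)
round(tli, 3)
```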

Specifies a two-factor model (GIAF loading on REL1–REL4, ACC, and APP; GIAF2 on PRAC1, PRAC2, EFF, and VAL) and fits a CFA with the ULS estimator while treating items as ordinal (ordered = names(WTM2)). The object resumen_CFtwo stores the key fit measures for later reporting.

WTM2 <- CMHA_df

#Two factor solution
Twofactor <- 'GIAF =~ REL1 + REL2 + REL3 + REL4 + ACC + APP
GIAF2 =~ PRAC1 + PRAC2 + EFF + VAL'

#CFA execution
CFAtwo <- cfa(Twofactor, orthogonal = FALSE, data = WTM2, estimator = "ULS",
              ordered = names(WTM2))
resumen_CFtwo <- fitMeasures(CFAtwo)
resumen_CFtwo
##                  npar                  fmin                 chisq 
##                54.000                 0.135                28.428 
##                    df                pvalue        baseline.chisq 
##                34.000                    NA              2215.948 
##           baseline.df       baseline.pvalue                   cfi 
##                45.000                    NA                 1.000 
##                   tli                  nnfi                   rfi 
##                 1.003                 1.003                 0.983 
##                   nfi                  pnfi                   ifi 
##                 0.987                 0.746                 1.003 
##                   rni                 rmsea        rmsea.ci.lower 
##                 1.003                 0.000                 0.000 
##        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
##                 0.053                 0.900                 0.937 
##        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
##                 0.050                 0.004                 0.080 
##                   rmr            rmr_nomean                  srmr 
##                 0.053                 0.070                 0.070 
##          srmr_bentler   srmr_bentler_nomean                  crmr 
##                 0.053                 0.070                 0.056 
##           crmr_nomean            srmr_mplus     srmr_mplus_nomean 
##                 0.078                    NA                    NA 
##                 cn_05                 cn_01                   gfi 
##               178.803               206.088                 0.998 
##                  agfi                  pgfi                   mfi 
##                 0.994                 0.386                 1.027 
##                  wrmr 
##                 0.568

Draws a path diagram of the fitted CFA model (CFAtwo) using semPaths. It hides intercepts, displays standardized loadings/paths, uses a circular layout of indicators around the latent factors, and applies readable sizing/colors so you can quickly inspect the model structure and parameter magnitudes.

semPaths(CFAtwo, intercepts = FALSE,edge.label.cex=1, optimizeLatRes = TRUE, groups = "lat",pastel = TRUE, exoVar = FALSE, sizeInt=3,edge.color ="black",esize = 3, label.prop=1,sizeLat = 8,"std", layout="circle2")

Extracts the main fit indices from resumen_CFtwo (χ², df, p, CFI, TLI, NNFI, RMSEA with 90% CI, SRMR), rounds them, and renders a clean publication-ready table with kable. This provides a concise summary of absolute fit for the two-factor CFA under the chosen estimator.

# Extract values (make sure 'resumen_CFtwo' contains these names)
nm <- c("chisq","df","pvalue","cfi","tli","nnfi","rmsea",
        "rmsea.ci.lower","rmsea.ci.upper","srmr")

# Keep names and coerce safely (missing names become NA)
vals <- as.numeric(resumen_CFtwo[nm])

# Build table
tab_cfa <- tibble(
  Model = "Two Factor",
  `χ²`  = round(vals[1], 2),
  df    = as.integer(vals[2]),
  p     = format.pval(vals[3], digits = 3, eps = 1e-3),
  CFI   = round(vals[4], 3),
  TLI   = round(vals[5], 3),
  NNFI  = if (!is.na(vals[6])) round(vals[6], 3) else NA_real_,
  RMSEA = round(vals[7], 3),
  `RMSEA 90% CI` = paste0("[",
                          sprintf("%.3f", vals[8]), ", ",
                          sprintf("%.3f", vals[9]), "]"),
  SRMR  = round(vals[10], 3)
)

# Show table
kable(
  tab_cfa,
  caption = "Table. Model fit indices for the CFA model (Two Factor)",
  align = c("l","r","r","r","r","r","r","r","l","r")
) |>
  kable_styling(full_width = FALSE, position = "center")
Table. Model fit indices for the CFA model (Two Factor)
Model χ² df p CFI TLI NNFI RMSEA RMSEA 90% CI SRMR
Two Factor 28.43 34 NA 1 1.003 1.003 0 [0.000, 0.053] 0.07

5. FACTOR INVARIANCE

One Factor

Prepares and tests measurement invariance by project for the one-factor model using lavaan:

  1. Converts project to a factor so lavaan can form groups.
  2. Fits four nested multi-group CFA models with the ULS estimator (non-orthogonal factors):
    • Configural: same factor structure in both groups, no equality constraints.
    • Metric: adds equality constraints on factor loadings across groups.
    • Scalar: additionally constrains intercepts (so both loadings and intercepts are equal).
    • Strict: further constrains residual variances to be equal. For each fit, summary(..., fit.measures = TRUE) computes standard fit indices.
  3. Uses lavaan::anova(...) to compare the models as nested contrasts (from fewer to more constraints).
    • Inspect the Chi-square difference tests (and Δdf).
    • Non-significant χ² differences indicate that adding constraints does not worsen fit → the corresponding level of invariance is supported.
giaf_invariance$project <- as.factor(giaf_invariance$project)
# Configurational model
cfa.config <- cfa(Onefactor,orthogonal=FALSE, data=giaf_invariance, estimator="ULS", group = "project")
resumen_cfa.config <- summary(cfa.config, fit.measures=TRUE)

# Metric model
cfa.metric <- cfa(Onefactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = "loadings")
resumen_cfa.metric <- summary(cfa.metric, fit.measures=TRUE)

# Scalar model
cfa.scalar <- cfa(Onefactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = c("loadings","intercepts"))
resumen_cfa.scalar <- summary(cfa.scalar, fit.measures=TRUE)

# Strict model
cfa.strict <- cfa(Onefactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = c("loadings","intercepts","residuals"))
resumen_cfa.strict <- summary(cfa.strict, fit.measures=TRUE)

# Nested model comparisons (most to least constrained)
lavaan::anova(cfa.strict,cfa.scalar,cfa.metric, cfa.config)

While the configural model provides the baseline, adding equality constraints leads to significant decreases in model fit.

Configural → Metric: Δχ² = 39.67, Δdf = 18, p = 0.0023 → significant → factor loadings are not fully equivalent across projects.

Metric → Scalar: Δχ² = 142.15, Δdf = 18, p < 0.001 → significant → intercepts/thresholds are not equivalent.

Scalar → Strict: Δχ² = 220.11, Δdf = 20, p < 0.001 → significant → residual variances are not equivalent.

Conclusion: The model does not retain comparable fit under increasing constraints. As a result, group comparisons (e.g., latent means or observed means) may be biased. Consider evaluating partial invariance (freeing problematic parameters), inspecting modification indices, and/or using alternative change criteria (e.g., ΔCFI/ΔRMSEA) alongside the χ² difference tests.
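A minimal sketch of that partial-invariance route, using lavaan's lavTestScore and the group.partial argument of cfa. The intercept freed here ("VAL ~ 1") is purely illustrative; the parameter to free must come from the actual score tests on the fitted models above.

```r
# Score tests show which equality constraints contribute most misfit
lavTestScore(cfa.scalar)

# Refit the scalar model with one intercept freed across groups.
# "VAL ~ 1" is a hypothetical choice -- replace it with whichever
# parameter lavTestScore flags as most strained.
cfa.partial <- cfa(Onefactor, orthogonal = FALSE, data = giaf_invariance,
                   estimator = "ULS", group = "project",
                   group.equal   = c("loadings", "intercepts"),
                   group.partial = c("VAL ~ 1"))
fitMeasures(cfa.partial, c("cfi", "rmsea", "srmr"))
```

If partial scalar invariance holds after freeing a small number of intercepts, latent-mean comparisons remain defensible for the constrained parameters.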

The following analysis summarizes the measurement invariance results across the four nested CFA models (Configural → Metric → Scalar → Strict) and applies standard decision rules:

  1. Collect fit indices (chisq, df, RMSEA, TLI, CFI, SRMR) for each model and bind them into a single object.
  2. Compute change statistics relative to the immediately less constrained model: \(\Delta\chi^{2}\), \(\Delta df\), \(\Delta\text{CFI}\), \(\Delta\text{RMSEA}\), \(\Delta\text{SRMR}\).
  3. Apply invariance criteria (Cheung & Rensvold; Chen): retain invariance when \(\Delta\text{CFI} \ge -0.010\), \(\Delta\text{RMSEA} \le 0.015\), and \(\Delta\text{SRMR} \le 0.010\) (flags appear in the “Decision” column).
  4. Format a publication-ready table: round values, group columns into “Absolute fit” and “Change vs. previous,” and add a clarifying footnote with the decision rule.
# 1) Collect fit measures for each step
fit.stats <- rbind(
  fitmeasures(cfa.config, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.metric, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.scalar, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.strict, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr"))
)
rownames(fit.stats) <- c("Configural","Metric","Scalar","Strict")

# 2) Build data frame and compute deltas vs. previous model
tab <- fit.stats %>%
  as.data.frame() %>%
  rownames_to_column(var = "Model") %>%
  mutate(across(c(chisq, df, rmsea, tli, cfi, srmr), as.numeric)) %>%
  mutate(
    d_chisq = c(NA, diff(chisq)),
    d_df    = c(NA, diff(df)),
    d_CFI   = c(NA, diff(cfi)),
    d_RMSEA = c(NA, diff(rmsea)),
    d_SRMR  = c(NA, diff(srmr))
  )

# 3) Invariance decision (Cheung & Rensvold / Chen rules of thumb)
#    Accept if: ΔCFI ≥ -0.010 AND ΔRMSEA ≤ 0.015 AND ΔSRMR ≤ 0.010 (for loadings)
ok <- function(dC, dR, dS) {
  if (is.na(dC) | is.na(dR) | is.na(dS)) return(NA)
  (dC >= -0.010) & (dR <= 0.015) & (dS <= 0.010)
}
tab$Decision <- c(NA,
                  ok(tab$d_CFI[2], tab$d_RMSEA[2], tab$d_SRMR[2]),
                  ok(tab$d_CFI[3], tab$d_RMSEA[3], tab$d_SRMR[3]),
                  ok(tab$d_CFI[4], tab$d_RMSEA[4], tab$d_SRMR[4]))

# 4) Round and format
tab_disp <- tab %>%
  mutate(
    across(c(chisq, rmsea, tli, cfi, srmr, d_chisq, d_CFI, d_RMSEA, d_SRMR), ~ round(.x, 3)),
    Decision = dplyr::case_when(
      is.na(Decision) ~ "—",
      Decision ~ "✓ Invariance retained",
      TRUE ~ "✗ Not retained"
    )
  ) %>%
  transmute(
    Model,
    `χ²` = chisq,
    df,
    RMSEA = rmsea,
    TLI   = tli,
    CFI   = cfi,
    SRMR  = srmr,
    `Δχ²` = d_chisq,
    `Δdf` = d_df,
    `ΔCFI` = d_CFI,
    `ΔRMSEA` = d_RMSEA,
    `ΔSRMR` = d_SRMR,
    Decision
  )

# 5) Render nice table
knitr::kable(
  tab_disp,
  caption = "Measurement invariance fit indices and changes across nested models",
  align = c("l","r","r","r","r","r","r","r","r","r","r","r","l"),
  booktabs = TRUE
) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "center",
                            bootstrap_options = c("striped","hover","condensed")) %>%
  kableExtra::add_header_above(c(" " = 1, "Absolute fit" = 6, "Change vs. previous" = 5, " " = 1)) %>%
  kableExtra::footnote(
    general = "Decision rule: retain invariance if ΔCFI ≥ −0.010, ΔRMSEA ≤ 0.015, and ΔSRMR ≤ 0.010.",
    threeparttable = TRUE
  )
Measurement invariance fit indices and changes across nested models
Absolute fit
Change vs. previous
Model χ² df RMSEA TLI CFI SRMR Δχ² Δdf ΔCFI ΔRMSEA ΔSRMR Decision
Configural 158.614 105 0.051 0.988 0.991 0.060 NA NA NA NA NA
Metric 198.280 123 0.056 0.986 0.987 0.068 39.666 18 -0.004 0.005 0.008 ✓ Invariance retained
Scalar 340.427 141 0.084 0.968 0.966 0.093 142.147 18 -0.021 0.029 0.025 ✗ Not retained
Strict 560.538 161 0.112 0.943 0.932 0.121 220.110 20 -0.034 0.027 0.027 ✗ Not retained
Note:
Decision rule: retain invariance if ΔCFI ≥ −0.010, ΔRMSEA ≤ 0.015, and ΔSRMR ≤ 0.010.
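As a sanity check, the decision rule can be replayed by hand on the delta values reported in the table above:

```r
# Replay the invariance decision rule (Cheung & Rensvold / Chen)
# on the deltas shown in the table above
invariance_ok <- function(d_cfi, d_rmsea, d_srmr) {
  (d_cfi >= -0.010) && (d_rmsea <= 0.015) && (d_srmr <= 0.010)
}

invariance_ok(-0.004, 0.005, 0.008)  # Metric step: TRUE (retained)
invariance_ok(-0.021, 0.029, 0.025)  # Scalar step: FALSE (not retained)
```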

Two Factors

giaf_invariance$project <- as.factor(giaf_invariance$project)
# Configurational model
cfa.config <- cfa(Twofactor,orthogonal=FALSE, data=giaf_invariance, estimator="ULS", group = "project")
resumen_cfa.config <- summary(cfa.config, fit.measures=TRUE)

# Metric model
cfa.metric <- cfa(Twofactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = "loadings")
resumen_cfa.metric <- summary(cfa.metric, fit.measures=TRUE)

# Scalar model
cfa.scalar <- cfa(Twofactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = c("loadings","intercepts"))
resumen_cfa.scalar <- summary(cfa.scalar, fit.measures=TRUE)

# Strict model
cfa.strict <- cfa(Twofactor, orthogonal=FALSE, data = giaf_invariance, estimator = "ULS", group = "project", group.equal = c("loadings","intercepts","residuals"))
resumen_cfa.strict <- summary(cfa.strict, fit.measures=TRUE)

# Nested model comparisons (most to least constrained)
lavaan::anova(cfa.strict,cfa.scalar,cfa.metric, cfa.config)
# 1) Collect fit measures for each step
fit.stats <- rbind(
  fitmeasures(cfa.config, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.metric, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.scalar, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr")),
  fitmeasures(cfa.strict, fit.measures = c("chisq","df","rmsea","tli","cfi","srmr"))
)
rownames(fit.stats) <- c("Configural","Metric","Scalar","Strict")

# 2) Build data frame and compute deltas vs. previous model
tab <- fit.stats %>%
  as.data.frame() %>%
  rownames_to_column(var = "Model") %>%
  mutate(across(c(chisq, df, rmsea, tli, cfi, srmr), as.numeric)) %>%
  mutate(
    d_chisq = c(NA, diff(chisq)),
    d_df    = c(NA, diff(df)),
    d_CFI   = c(NA, diff(cfi)),
    d_RMSEA = c(NA, diff(rmsea)),
    d_SRMR  = c(NA, diff(srmr))
  )

# 3) Invariance decision (Cheung & Rensvold / Chen rules of thumb)
#    Accept if: ΔCFI ≥ -0.010 AND ΔRMSEA ≤ 0.015 AND ΔSRMR ≤ 0.010 (for loadings)
ok <- function(dC, dR, dS) {
  if (is.na(dC) | is.na(dR) | is.na(dS)) return(NA)
  (dC >= -0.010) & (dR <= 0.015) & (dS <= 0.010)
}
tab$Decision <- c(NA,
                  ok(tab$d_CFI[2], tab$d_RMSEA[2], tab$d_SRMR[2]),
                  ok(tab$d_CFI[3], tab$d_RMSEA[3], tab$d_SRMR[3]),
                  ok(tab$d_CFI[4], tab$d_RMSEA[4], tab$d_SRMR[4]))

# 4) Round and format
tab_disp <- tab %>%
  mutate(
    across(c(chisq, rmsea, tli, cfi, srmr, d_chisq, d_CFI, d_RMSEA, d_SRMR), ~ round(.x, 3)),
    Decision = dplyr::case_when(
      is.na(Decision) ~ "—",
      Decision ~ "✓ Invariance retained",
      TRUE ~ "✗ Not retained"
    )
  ) %>%
  transmute(
    Model,
    `χ²` = chisq,
    df,
    RMSEA = rmsea,
    TLI   = tli,
    CFI   = cfi,
    SRMR  = srmr,
    `Δχ²` = d_chisq,
    `Δdf` = d_df,
    `ΔCFI` = d_CFI,
    `ΔRMSEA` = d_RMSEA,
    `ΔSRMR` = d_SRMR,
    Decision
  )

# 5) Render nice table
knitr::kable(
  tab_disp,
  caption = "Measurement invariance fit indices and changes across nested models",
  align = c("l","r","r","r","r","r","r","r","r","r","r","r","l"),
  booktabs = TRUE
) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "center",
                            bootstrap_options = c("striped","hover","condensed")) %>%
  kableExtra::add_header_above(c(" " = 1, "Absolute fit" = 6, "Change vs. previous" = 5, " " = 1)) %>%
  kableExtra::footnote(
    general = "Decision rule: retain invariance if ΔCFI ≥ −0.010, ΔRMSEA ≤ 0.015, and ΔSRMR ≤ 0.010.",
    threeparttable = TRUE
  )
Measurement invariance fit indices and changes across nested models
Absolute fit
Change vs. previous
Model χ² df RMSEA TLI CFI SRMR Δχ² Δdf ΔCFI ΔRMSEA ΔSRMR Decision
Configural 108.452 102 0.018 0.999 0.999 0.052 NA NA NA NA NA
Metric 147.700 118 0.036 0.994 0.995 0.061 39.247 16 -0.004 0.018 0.009 ✗ Not retained
Scalar 220.481 134 0.057 0.985 0.985 0.076 72.782 16 -0.010 0.021 0.015 ✗ Not retained
Strict 376.804 154 0.085 0.967 0.962 0.099 156.323 20 -0.023 0.028 0.023 ✗ Not retained
Note:
Decision rule: retain invariance if ΔCFI ≥ −0.010, ΔRMSEA ≤ 0.015, and ΔSRMR ≤ 0.010.

6. RELIABILITY

This section estimates internal consistency for the GIAF items in the total sample and by Project using indices appropriate for Likert-type data.

  • Uses the same item set and column order as giaf_df for all groups.
  • Removes incomplete rows (complete-case analysis) before computing statistics; if \(N<5\) it returns NA to avoid unstable estimates.
  • Computes a polychoric correlation matrix (appropriate for ordered-categorical items) and, from it:
    • Ordinal alpha: standardized Cronbach’s \(\alpha\) on the polychoric matrix (less attenuated than raw \(\alpha\) with ordinal data).
    • Split-half (Guttman \(\lambda_4\)): reliability based on the best split of items using the correlation matrix.
    • McDonald’s omega total (\(\omega_{\text{tot}}\)): model-based estimate of the proportion of total score variance attributable to all common factors.
  • Aggregates results for the total sample and each project into a single table and rounds values for reporting.
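For intuition, "ordinal alpha" is simply standardized Cronbach's \(\alpha\) computed from the polychoric correlation matrix: with \(k\) items and mean inter-item correlation \(\bar{r}\), \(\alpha_{std} = k\bar{r}/(1 + (k-1)\bar{r})\). A base-R sketch on a toy matrix (values hypothetical):

```r
# Standardized alpha from any correlation matrix:
# alpha_std = k * rbar / (1 + (k - 1) * rbar)
std_alpha <- function(R) {
  k    <- ncol(R)
  rbar <- mean(R[lower.tri(R)])       # mean off-diagonal correlation
  k * rbar / (1 + (k - 1) * rbar)
}

# Toy 3-item correlation matrix (hypothetical values)
R_toy <- matrix(0.5, nrow = 3, ncol = 3); diag(R_toy) <- 1
std_alpha(R_toy)  # 3 * 0.5 / (1 + 2 * 0.5) = 0.75
```

Feeding the polychoric rather than the Pearson matrix into this formula is what makes the estimate "ordinal" and less attenuated for Likert-type items.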
# --- Reliability table for TOTAL + ALL projects ---
# Ordinal Alpha (polychoric R), Split-Half (Guttman λ4), and Omega total (ωt)

# 1) Helper: compute reliability indices from an items-only data frame
reliab_from_items <- function(df_items) {
  df_items <- df_items %>% select(all_of(colnames(giaf_df)))   # ensure same item set/order
  df_items <- df_items[stats::complete.cases(df_items), , drop = FALSE]
  N <- nrow(df_items)
  if (N < 5) {
    return(tibble(Alpha_ord = NA_real_, SplitHalf = NA_real_, Omega_tot = NA_real_, N = N))
  }
  # Polychoric correlation (ordinal)
  pc <- psych::polychoric(df_items)$rho
  
  # Ordinal alpha (std.alpha on polychoric R)
  a_out <- suppressWarnings(psych::alpha(pc))
  alpha_ord <- unname(a_out$total$std.alpha)
  
  # Split-half reliability on the correlation matrix (meanr = mean split-half reliability)
  sh_out <- suppressWarnings(psych::splitHalf(pc))
  lambda4 <- if (!is.null(sh_out$meanr)) unname(sh_out$meanr) else NA_real_
  
  # McDonald's Omega total
  om_out <- suppressWarnings(psych::omega(pc, plot = FALSE))
  omega_t <- if (!is.null(om_out$omega.tot)) unname(om_out$omega.tot) else NA_real_
  
  tibble(Alpha_ord = alpha_ord, SplitHalf = lambda4, Omega_tot = omega_t, N = N)
}

# 2) Prepare the three datasets (items must match giaf_df columns)
items_total  <- giaf_df
items_DRA <- df_DRA
items_Diskussionsforum   <- df_Diskussionsforum
items_CMHA   <- df_CMHA

# 3) Compute reliability for each group
rel_total  <- reliab_from_items(items_total)  %>% mutate(Group = "Total")
rel_DRA <- reliab_from_items(items_DRA) %>% mutate(Group = "DRA")
rel_Diskussionsforum   <- reliab_from_items(items_Diskussionsforum)   %>% mutate(Group = "Diskussionsforum")
rel_CMHA   <- reliab_from_items(items_CMHA)   %>% mutate(Group = "CMHA")

rel_table <- bind_rows(rel_total, rel_DRA, rel_Diskussionsforum, rel_CMHA) %>%
  relocate(Group) %>%
  mutate(across(c(Alpha_ord, SplitHalf, Omega_tot), ~ round(.x, 3)))

# 4) Display table
knitr::kable(
  rel_table,
  caption = "Reliability indices for GIAF items: Ordinal Alpha, Split-Half (Guttman λ4), and Omega total (ωt) for total sample and by project",
  align = c("l","r","r","r","r")
) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "center", bootstrap_options = c("striped","hover"))
Reliability indices for GIAF items: Ordinal Alpha, Split-Half (Guttman λ4), and Omega total (ωt) for total sample and by project
Group Alpha_ord SplitHalf Omega_tot N
Total 0.926 0.926 0.949 599
DRA 0.926 0.926 0.950 241
Diskussionsforum 0.930 0.930 0.950 253
CMHA 0.955 0.955 0.977 105
# --- Reliability table by subscale (Total, DRA, Diskussionsforum, CMHA) ---
# Computes Ordinal Alpha (from polychoric R), Split-Half (Guttman λ4), and Omega total (ωt)

# Items per subscale (make sure these columns exist)
items_GIAF  <- c("REL1","REL2","REL3","REL4","ACC","APP")
items_GIAF2 <- c("PRAC1","PRAC2","EFF","VAL")

# ---------- 1) Reliability helper per subscale ----------
reliab_subscale <- function(df_items, scale_items, scale_name, ref_cols) {
  # keep only defined items that exist, in the same order
  scale_items <- intersect(scale_items, ref_cols)
  df_sub <- df_items[, scale_items, drop = FALSE]
  df_sub <- df_sub[stats::complete.cases(df_sub), , drop = FALSE]
  N <- nrow(df_sub)

  # if fewer than 2 items remain or N is very small, return NA
  if (length(scale_items) < 2 || N < 5) {
    return(tibble(Scale = scale_name, N = N,
                  Alpha_ord = NA_real_, SplitHalf = NA_real_, Omega_tot = NA_real_))
  }

  # polychoric correlation
  Rpoly <- psych::polychoric(df_sub)$rho

  # ordinal alpha
  a_out <- suppressWarnings(psych::alpha(Rpoly))
  alpha_ord <- unname(a_out$total$std.alpha)

  # split-half reliability (mean split-half from psych::splitHalf)
  sh_out <- suppressWarnings(psych::splitHalf(Rpoly))
  lambda4 <- if (!is.null(sh_out$meanr)) unname(sh_out$meanr) else NA_real_

  # omega total
  om_out <- suppressWarnings(psych::omega(Rpoly, plot = FALSE))
  omega_t <- if (!is.null(om_out$omega.tot)) unname(om_out$omega.tot) else NA_real_

  tibble(Scale = scale_name, N = N,
         Alpha_ord = alpha_ord, SplitHalf = lambda4, Omega_tot = omega_t)
}

# ---------- 2) Helper: reliability for both subscales in a group ----------
reliab_for_group <- function(df_group, ref_cols, group_label) {
  bind_rows(
    reliab_subscale(df_group, items_GIAF,  "GIAF",  ref_cols),
    reliab_subscale(df_group, items_GIAF2, "GIAF2", ref_cols)
  ) |>
    mutate(Group = group_label, .before = 1)
}

# ---------- 3) Prepare datasets ----------
items_total <- giaf_df
items_DRA   <- df_DRA
items_Diskussionsforum <- df_Diskussionsforum
items_CMHA   <- df_CMHA

# reference columns (from the total sample) to ensure the same item set/order
ref_cols <- colnames(giaf_df)

# ---------- 4) Compute tables ----------
rel_total <- reliab_for_group(items_total, ref_cols, "Total")
rel_DRA   <- reliab_for_group(items_DRA,   ref_cols, "DRA")
rel_Diskussionsforum <- reliab_for_group(items_Diskussionsforum, ref_cols, "Diskussionsforum")
rel_CMHA <- reliab_for_group(items_CMHA, ref_cols, "CMHA")

rel_table <- bind_rows(rel_total, rel_DRA, rel_Diskussionsforum, rel_CMHA) %>%
  mutate(across(c(Alpha_ord, SplitHalf, Omega_tot), ~ round(.x, 3)))

# ---------- 5) Display ----------
knitr::kable(
  rel_table,
  caption = "Reliability (polychoric): Ordinal Alpha, Split-Half (Guttman λ4) and Omega total for GIAF and GIAF2 by group",
  col.names = c("Group","Scale","N","Alpha (ord.)","Split-Half (λ4)","Omega total"),
  align = c("l","l","r","r","r","r")
) |>
  kableExtra::kable_styling(full_width = FALSE, position = "center",
                            bootstrap_options = c("striped","hover"))
Reliability (polychoric): Ordinal Alpha, Split-Half (Guttman λ4) and Omega total for GIAF and GIAF2 by group
Group Scale N Alpha (ord.) Split-Half (λ4) Omega total
Total GIAF 599 0.920 0.920 0.956
Total GIAF2 599 0.832 0.832 0.884
DRA GIAF 241 0.935 0.935 0.970
DRA GIAF2 241 0.797 0.797 0.866
Diskussionsforum GIAF 253 0.908 0.908 0.944
Diskussionsforum GIAF2 253 0.849 0.849 0.872
CMHA GIAF 105 0.944 0.944 0.965
CMHA GIAF2 105 0.908 0.908 0.960

7. NETWORK ANALYSIS

7.1. Community Identification

A bootstrap exploratory graph analysis (EGA) was performed to estimate the number, structure, and stability of variable communities (i.e., network dimensions) based on their structural consistency. In this context, a community refers to a cluster of nodes that exhibit strong interconnections through edges.

7.1.1 Community Estimation

The analysis facilitates the identification and delineation of such communities, as well as the allocation of individual items to their corresponding dimensions.

set.seed(123)
communities <- bootEGA(giaf_df,
                       model = "glasso",
                       type = "resampling",
                       iter = 1000,
                       typicalStructure = TRUE,
                       plot.typicalStructure = TRUE)
Figure 1. Community Estimation

Figure 1. Community Estimation

7.1.2. Community Stability

Items were retained within their assigned communities only when their stability coefficients exceeded 0.70. Items falling below this threshold were excluded due to their potential to undermine the structural integrity of the corresponding dimension.

# Structural consistency
communities.dimstab <- dimensionStability(communities)
Figure 2. Community Stability

Figure 2. Community Stability

7.2. Gaussian Graphical Models (GGMs)

7.2.1. Network Estimation

#Setting features
#Global Impact Analytics Framework
pieColor <- c(rep("#EB9446", length(giaf_df))) # Orange color for the node pie border
group_list <- c(rep('Relevance',4),rep('Acceptability',1),rep('Applicability',1),rep('Practicality',2),
                rep('Efficiency',1), rep('Value',1))
items <- names(giaf_df)
item_labels <- c(
  "Outcomes are relevant for users",
  "Meaningful to the target audience",
  "New/different/interesting",
  "Strong potential for use here",
  "Acceptable and approved for this purpose",
  "Useful in this local setting",
  "User-friendly",
  "Realistic training requirements",
  "Better results than similar applications here",
  "Fit-for-purpose and valuable vs. alternatives"
)

GIAF_poly <- polychoric(giaf_df)

# Plot network
suppressMessages(suppressWarnings({
GIAF_glasso <- qgraph::qgraph(
  GIAF_poly$rho,
  graph        = "glasso",
  layout       = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = TRUE,
  sampleSize   = nrow(giaf_df),
  vsize        = 5,
  esize        = 5,
  asize        = 6,
  border.width = 1,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # slightly shrinks the network
  rescale      = TRUE
)
}))
Figure 3. GIAF Gaussian Graphical Model Estimation

Figure 3. GIAF Gaussian Graphical Model Estimation

possible_edges <- (ncol(giaf_df)*(ncol(giaf_df)-1))/2
real_edges <- length(GIAF_glasso$Edgelist$weight)
density <- round(real_edges/possible_edges*100, digits = 2)

# Print explanation and result
cat("Number of potential edges:", possible_edges, "\n")
## Number of potential edges: 45
cat("Number of real edges:", real_edges, "\n")
## Number of real edges: 24
cat("Network density (percentage of actual edges out of all possible undirected edges):", density, "%\n")
## Network density (percentage of actual edges out of all possible undirected edges): 53.33 %
# Extract edge weights
edge_weights <- GIAF_glasso$Edgelist$weight

# Calculate summary statistics
mean_weight <- round(mean(edge_weights), digits = 3)
sd_weight <- round(sd(edge_weights), digits = 3)
max_weight <- round(max(edge_weights), digits = 3)
min_weight <- round(min(edge_weights), digits = 3)

# Print explanation and results clearly
cat("Standard Edge Weight \n")
## Standard Edge Weight
cat("The average edge weight was:", mean_weight, "(SD:", sd_weight, ")\n")
## The average edge weight was: 0.186 (SD: 0.156 )
cat("Minimum weight:", min_weight, "| Maximum weight:", max_weight, "\n")
## Minimum weight: 0.064 | Maximum weight: 0.677
# Calculate summary statistics
mean_weight <- round(mean(abs(edge_weights)), digits = 3)
sd_weight <- round(sd(abs(edge_weights)), digits = 3)
max_weight <- round(max(abs(edge_weights)), digits = 3)
min_weight <- round(min(abs(edge_weights)), digits = 3)

# Print explanation and results clearly
cat("\n")
cat("Absolute Values \n")
## Absolute Values
cat("The average edge weight was:", mean_weight, "(SD:", sd_weight, ")\n")
## The average edge weight was: 0.186 (SD: 0.156 )
cat("Minimum weight:", min_weight, "| Maximum weight:", max_weight, "\n")
## Minimum weight: 0.064 | Maximum weight: 0.677

7.2.2. Network characterization

GIAF_matrix <- as.matrix(giaf_df)
invisible(capture.output({SV_mgm <-mgm(
    data = GIAF_matrix,
    type = rep("g", length(giaf_df)),
    level = rep(1, length(giaf_df)),
    k = 2,
    verbatim = TRUE,
    warnings = TRUE
  )
}))

# Predict each node's explained variance and other parameters
SV_predic <- predict(SV_mgm, giaf_df, error.continuous = "VarExpl")

suppressMessages(suppressWarnings({
GIAF_glasso_predicted <- qgraph::qgraph(
  cor_auto(giaf_df),
  graph        = "glasso",
  layout       = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = TRUE,
  sampleSize   = nrow(giaf_df),
  pie          = SV_predic$errors$R2,
  pieColor     = pieColor,
  vsize        = 5,
  esize        = 5,
  asize        = 6,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # slightly shrinks the network
  rescale      = TRUE
)
}))
Figure 4. Gaussian Graphical Model Characterization

Figure 4. Gaussian Graphical Model Characterization

7.3. Centrality Measures

7.3.1. Centrality Estimation

centrality_GIAF <- centralityPlot(GIAF_glasso, include = 
                                  c("Betweenness","Closeness","Strength","ExpectedInfluence"),
                                orderBy ="Betweenness", scale = "z-scores")
Figure 5. Centrality Measures Estimation

Figure 5. Centrality Measures Estimation

7.3.2. Centrality Stability

# 1. Estimate the network with EBICglasso
glasso_net <- estimateNetwork(giaf_df, 
                              default = "EBICglasso", 
                              corMethod = "cor_auto",
                              verbose = FALSE)

# 2. Stability bootstrap (case-dropping)
set.seed(2025)
boot_results <- invisible(bootnet(
  estimateNetwork(giaf_df, default = "EBICglasso", corMethod = "cor_auto"),
  type      = "case",
  nBoots    = 200,
  nCores    = 1,        # single core so messages stay visible
  statistics= c("strength", "closeness", "betweenness","expectedInfluence"),  # centrality indices to bootstrap
  verbose   = FALSE,
  maxErrors = 1000
))

# 3. Centrality stability plot (case-dropping bootstrap)
plot(boot_results,
     statistics = c("strength", "closeness", "betweenness","expectedInfluence"),
     labels = TRUE, order = "sample", legend = TRUE,
     color = "darkblue", line = TRUE,
     main = "Bootstrap Centrality Stability (Case Dropping)")
Figure 6. Centrality Measures Stability

Figure 6. Centrality Measures Stability

# 4. Centrality stability coefficients
cs_coeffs <- corStability(boot_results, verbose = FALSE)

# Build table
cs_table <- data.frame(
  Centrality = c("Betweenness", "Closeness", "Strength", "ExpectedInfluence"),
  CS_Coefficient = round(c(cs_coeffs[1], cs_coeffs[2], cs_coeffs[4], cs_coeffs[3]), 3)
)
row.names(cs_table) <- 1:4

# Display as a table in R Markdown
knitr::kable(cs_table, caption = "Table 10: Centrality Stability Coefficients (CS)",
align = c("l","r")) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "center", bootstrap_options = c("striped","hover"))
Table 10: Centrality Stability Coefficients (CS)
Centrality CS_Coefficient
Betweenness 0.000
Closeness 0.205
Strength 0.594
ExpectedInfluence 0.594
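By the usual rule of thumb (Epskamp, Borsboom, & Fried, 2018), a CS-coefficient should be at least 0.25, and preferably above 0.50, before a centrality index is interpreted; here strength and expected influence clear the preferred threshold, while closeness and betweenness fall below the minimum. A base-R sketch of that classification:

```r
# Classify CS-coefficients with the common thresholds
# (>= 0.50 preferred, >= 0.25 minimally acceptable)
classify_cs <- function(cs) {
  ifelse(cs >= 0.50, "interpretable",
         ifelse(cs >= 0.25, "interpret with caution", "do not interpret"))
}

cs <- c(Betweenness = 0.000, Closeness = 0.205,
        Strength = 0.594, ExpectedInfluence = 0.594)
classify_cs(cs)
```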

8. NETWORK COMPARISON

8.1. Networks by Project

Diskussionsforum’s Network

# --- Diskussionsforum GROUP ---

# Convert data to matrix
GIAF_Diskussionsforum_matrix <- as.matrix(Diskussionsforum_df)

# Estimate MGM model (defines number of variables, order of interactions, and input data)
invisible(capture.output({
  GIAF_mgm_Diskussionsforum <- mgm(
    data = GIAF_Diskussionsforum_matrix,
  type = rep("g", length(Diskussionsforum_df)),
  level = rep(1, length(Diskussionsforum_df)),
  k = 2,
  warnings = TRUE,
  verbatim = TRUE
  )
}))

# Predict explained variance (R²) and other node-level metrics
GIAF_predic_Diskussionsforum <- predict(
  GIAF_mgm_Diskussionsforum,
  Diskussionsforum_df,
  error.continuous = "VarExpl"
)

suppressMessages(suppressWarnings({
Diskussionsforum_glasso_predicted <- qgraph::qgraph(
  cor_auto(Diskussionsforum_df),
  graph        = "glasso",
  layout       = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = TRUE,
  sampleSize   = nrow(Diskussionsforum_df),
  pie          = GIAF_predic_Diskussionsforum$errors$R2,
  pieColor     = pieColor,
  vsize        = 5,
  esize        = 5,
  asize        = 6,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # slightly shrinks the network
  rescale      = TRUE
)
}))
Figure 9. Network of Diskussionsforum Project

Figure 9. Network of Diskussionsforum Project

DRA’s network

# --- DRA GROUP ---

# Convert data to matrix
GIAF_DRA_matrix <- as.matrix(DRA_df)

# Estimate MGM model (defines number of variables, order of interactions, and input data)
invisible(capture.output({
  GIAF_mgm_DRA <- mgm(
    data = GIAF_DRA_matrix,
  type = rep("g", length(DRA_df)),
  level = rep(1, length(DRA_df)),
  k = 2,
  warnings = TRUE,
  verbatim = TRUE
  )
}))

# Predict explained variance (R²) and other node-level metrics
GIAF_predic_DRA <- predict(
  GIAF_mgm_DRA,
  DRA_df,
  error.continuous = "VarExpl"
)

suppressMessages(suppressWarnings({
DRA_glasso_predicted <- qgraph::qgraph(
  cor_auto(DRA_df),
  graph        = "glasso",
  layout       = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = TRUE,
  sampleSize   = nrow(DRA_df),
  pie          = GIAF_predic_DRA$errors$R2,
  pieColor     = pieColor,
  vsize        = 5,
  esize        = 5,
  asize        = 6,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # slightly shrinks the network
  rescale      = TRUE
)
}))
Figure 10. Network of DRA Project

CMHA’s Network

# --- CMHA GROUP ---

# Convert data to matrix
GIAF_CMHA_matrix <- as.matrix(CMHA_df)

# Estimate MGM model (defines number of variables, order of interactions, and input data)
invisible(capture.output({
  GIAF_mgm_CMHA <- mgm(
    data = GIAF_CMHA_matrix,
    type = rep("g", length(CMHA_df)),
    level = rep(1, length(CMHA_df)),
    k = 2,
    warnings = TRUE,
    verbatim = TRUE
  )
}))

# Predict explained variance (R²) and other node-level metrics
GIAF_predic_CMHA <- predict(
  GIAF_mgm_CMHA,
  CMHA_df,
  error.continuous = "VarExpl"
)

suppressMessages(suppressWarnings({
CMHA_glasso_predicted <- qgraph::qgraph(
  cor_auto(CMHA_df),
  graph        = "glasso",
  layout       = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = TRUE,
  sampleSize   = nrow(CMHA_df),
  pie          = GIAF_predic_CMHA$errors$R2,
  pieColor     = pieColor,
  vsize        = 5,
  esize        = 5,
  asize        = 6,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # shrink the network slightly
  rescale      = TRUE
)
}))
Figure 11. Network of CMHA Project

8.2. Centrality Measures

Centrality measures are compared across the three project networks to evaluate whether the same items are structurally important in each project.
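
As context for this comparison, the Strength centrality reported by these plots is simply the sum of the absolute edge weights attached to each node. A minimal, language-agnostic sketch of that computation (shown in Python with a hypothetical 3-node weight matrix, not study data):

```python
# Strength centrality: for each node, sum the absolute weights of its edges.
# The symmetric 3x3 matrix below is hypothetical, not study data.
weights = [
    [0.0,  0.3, -0.2],
    [0.3,  0.0,  0.5],
    [-0.2, 0.5,  0.0],
]

strength = [sum(abs(w) for w in row) for row in weights]
print([round(s, 3) for s in strength])  # one value per node
```

Betweenness and closeness, also compared below, are instead computed on the shortest paths of the weighted graph.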

# Compare centralities between 3 networks
network_comparison <- compareCentrality3_color(
  Diskussionsforum_glasso_predicted,  # net1
  DRA_glasso_predicted,               # net2
  CMHA_glasso_predicted,              # net3
  include    = "all",
  legendName = "Centrality Measures by Project",
  net1Name   = "Diskussionsforum",
  net2Name   = "DRA",
  net3Name   = "CMHA"
)
network_comparison  # prints the ggplot
Figure 11. Centrality comparison across the Diskussionsforum, DRA, and CMHA networks

9. BAYESIAN NETWORK

The construction of the Bayesian network (BN) is carried out in two steps. Firstly, the estimation of the directed acyclic graph (DAG) is performed, and secondly, the BN model is fitted and validated with the study dataset.

9.1. DAG Estimation

For the DAG estimation phase, the PC-Stable algorithm with no restrictions was employed. To ensure stability of the resulting DAG, 500 bootstrap samples were drawn, and only edges with a strength of at least 0.85 and a direction probability of at least 0.5 were retained in the final model.
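
The retention rule can be illustrated numerically before running the full pipeline below. This is a sketch with hypothetical edge records (the actual filtering operates on the `boot.strength` output in R):

```python
# Bootstrap-averaged DAG edge retention (sketch; hypothetical values, not study data).
# Keep an edge only if it appeared in at least 85% of the bootstrap DAGs (strength)
# and, when present, pointed in the stated direction at least 50% of the time.
edges = [
    {"from": "Item1", "to": "Item2", "strength": 0.92, "direction": 0.71},  # kept
    {"from": "Item2", "to": "Item3", "strength": 0.80, "direction": 0.90},  # dropped: strength too low
    {"from": "Item3", "to": "Item4", "strength": 0.88, "direction": 0.45},  # dropped: direction too low
]

retained = [e for e in edges if e["strength"] >= 0.85 and e["direction"] >= 0.5]
print([(e["from"], e["to"]) for e in retained])  # [('Item1', 'Item2')]
```
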

set.seed(1234)
bootstr <- boot.strength(giaf_df, R = 500, algorithm = "pc.stable")

bootstr <- bootstr[bootstr$strength >= 0.85 & bootstr$direction >= 0.5,]

avgnet <- averaged.network(bootstr, threshold = 0.85)

sp <- strength.plot(avgnet, bootstr, shape = "ellipse", render = FALSE)

GIAF_DAG_qgraph <- qgraph::qgraph(
  sp, layout = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  labels       = TRUE,
  threshold    = FALSE,
  sampleSize   = nrow(giaf_df),
  vsize        = 4.5,
  esize        = 3,
  asize        = 3,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # shrink the network slightly
  rescale      = TRUE
)
Figure 12. Directed Acyclic Graph (DAG)

9.2. Bayesian Network Validation

To fit and validate the BN model in the second phase, the dataset was split into 10 folds. In each iteration, nine folds (90% of the data) were used to train the model and the remaining fold (10%) was used for testing. This cross-validation routine was repeated until every fold had served once as the test set.
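
The fold-level error metrics averaged in the loop below can be sketched as follows (Python; toy values, not study data). The RMSE and MAE definitions match the usual ones; note that `caret::R2`, used in the R code, defaults to the squared correlation between predictions and observations rather than the 1 − SS_res/SS_tot form.

```python
import math

# Per-fold validation metrics for continuous nodes (sketch; toy values, not study data).
def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

pred = [3.0, 4.0, 2.0, 5.0]
obs  = [3.0, 5.0, 2.0, 4.0]
fold_metrics = {"RMSE": rmse(pred, obs), "MAE": mae(pred, obs)}

# Fold results are stacked and column-wise averaged, as in the R loop:
fold_rmses = [0.50, 0.70, 0.60]
cv_rmse = sum(fold_rmses) / len(fold_rmses)
print(fold_metrics, round(cv_rmse, 3))
```

For categorical nodes the same scheme applies, but with classification metrics from a confusion matrix instead of R², RMSE, and MAE.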

pieColor <- c(rep("#EB9446", length(giaf_df))) # orange pie colour for each node
#Cross-validation (Check spatial cross-validation: blockCV)
set.seed(1234)
# Split data into 10 folds
kf <- dismo::kfold(nrow(giaf_df), k = 10) # k-fold partitioning
models_fit <- c()

for (var in names(giaf_df)) {
  kfold_fit <- c()
  aux_base <- data.frame('variable'=var)
  if(is.numeric(giaf_df[[var]])){
    for(k in 1:10) {
      test <- giaf_df[kf == k, ]
      train <- giaf_df[kf != k, ]
      
      training <- bn.fit(avgnet,train)
      pred <- predict(training, node = var, data = test)
      
      z <- data.frame( R2 = R2(pred, test[[var]]),
                       RMSE = RMSE(pred, test[[var]]),
                       MAE = MAE(pred, test[[var]]))
      if(is.na(z$R2)){
        z[is.na(z$R2),] <- 0
      }
      kfold_fit <- rbind(kfold_fit, z) 
    }
    
    z <- apply(kfold_fit, 2, mean)
    z <- cbind(aux_base,as.data.frame(t(z)))
    models_fit <- rbind(models_fit, z)
  }else{
    for(k in 1:10) {
      # Split data in test and train
      test <- giaf_df[kf == k, ]
      train <- giaf_df[kf != k, ]
      
      training <- bn.fit(avgnet,train)
      pred <- predict(training, node = var, data = test)
      
      validation <- confusionMatrix(as.factor(pred),test[[var]])
      z1 <- as.data.frame(t(validation$overall))
      
      if(length(levels(giaf_df[[var]])) > 2){
        z2 <- as.data.frame(t(sapply(as.data.frame(validation$byClass),mean))) 
      }else{
        z2 <- as.data.frame(t(sapply(validation$byClass,mean))) 
      }
      z <- cbind(z1, z2)
      
      kfold_fit <- rbind(kfold_fit, z)
    }
    z <- apply(kfold_fit, 2, mean)
    z <- cbind(aux_base,z)
    models_fit <- rbind(models_fit, z)
  }
}

GIAF_BN_qgraph <- qgraph::qgraph(
  sp, layout = "spring",
  groups       = group_list,
  palette      = "ggplot2",
  nodeNames    = item_labels,
  pieColor = pieColor,
  pie = models_fit$R2,
  labels       = TRUE,
  threshold    = FALSE,
  sampleSize   = nrow(giaf_df),
  vsize        = 4.5,
  esize        = 3,
  asize        = 3,
  border.width = 0.9,
  border.color = "black",
  unCol        = "black",
  theme        = "colorblind",
  legend       = TRUE,          # automatic legend
  legend.cex   = 0.8,           # legend text size
  layoutScale  = c(0.8, 0.8),   # shrink the network slightly
  rescale      = TRUE
)
Figure 13. Bayesian Network (BN) Model

10. PREDICTABILITY

Across the partial-correlation networks (GGMs), predictability in the full sample was highest for Item 1 (R² = 0.662; RMSE = 0.581) and lowest for Item 7 (R² = 0.381; RMSE = 0.786). Explained variance was generally higher in the CMHA network than in the Diskussionsforum and DRA networks; in CMHA, R² reached 0.803 for Item 1, whereas the lowest project-level value was R² = 0.331 for Item 7 in DRA.

For the Bayesian network (BN), estimates were available for Items 2, 3, 4, 6, 7, 8, and 9. Within the BN, Item 2 showed the highest explained variance (R² = 0.615) together with the lowest RMSE (0.593), followed by Item 6 (R² = 0.491) and Item 4 (R² = 0.437), while Item 8 had the lowest (R² = 0.287). The largest BN prediction error was observed for Item 7 (RMSE = 0.904). (See Table 12.)

# Item labels
items <- paste("Item", 1:10)

# GGM - General
ggm_general <- SV_predic$errors %>%
  transmute(
    Variable = items,
    R2_gen = round(R2, 3),
    RMSE_gen = round(RMSE, 3)
  )

# GGM - Diskussionsforum
ggm_Diskussionsforum <- GIAF_predic_Diskussionsforum$errors %>%
  transmute(
    R2_Diskussionsforum = round(R2, 3),
    RMSE_Diskussionsforum = round(RMSE, 3)
  )

# GGM - DRA
ggm_DRA <- GIAF_predic_DRA$errors %>%
  transmute(
    R2_DRA = round(R2, 3),
    RMSE_DRA = round(RMSE, 3)
  )

# GGM - CMHA
ggm_CMHA <- GIAF_predic_CMHA$errors %>%
  transmute(
    R2_CMHA = round(R2, 3),
    RMSE_CMHA = round(RMSE, 3)
  )

# BN model
bn <- models_fit %>%
  transmute(
    R2_bn = ifelse(R2 == 0, "-", round(R2, 3)),
    RMSE_bn = ifelse(RMSE == 0, "-", round(RMSE, 3))
  )

# Combine all safely
final_table <- bind_cols(ggm_general, ggm_Diskussionsforum, ggm_DRA, ggm_CMHA, bn)

# Set final column names to just "R²" and "RMSE" per section
names(final_table) <- c(
  "Variable",
  rep(c("R²", "RMSE"), times = 5)  # General, Diskussionsforum, DRA, CMHA, BN
)

# Render with multi-level header
final_table %>%
  kable(align = "lcccccccccc", booktabs = TRUE,
        caption = "Table 12. Explained variance and RMSE of partial-correlation network (GGM) and BN models") %>%
  add_header_above(c(" " = 1, 
                     "General" = 2, 
                     "Diskussionsforum" = 2, 
                     "DRA" = 2, 
                     "CMHA" = 2,
                     " " = 2)) %>%
    add_header_above(c(" " = 1, 
                     "GGMs" = 8, 
                     "BN" = 2)) %>%
  kable_styling(full_width = FALSE, position = "center")
Table 12. Explained variance and RMSE of partial-correlation network (GGM) and BN models
                         GGMs                                                   BN
          General        Diskussionsforum  DRA            CMHA
Variable  R²     RMSE    R²     RMSE       R²     RMSE    R²     RMSE    R²     RMSE
Item 1    0.662  0.581   0.650  0.590      0.618  0.617   0.803  0.442   -      -
Item 2    0.654  0.588   0.601  0.631      0.679  0.566   0.768  0.480   0.615  0.593
Item 3    0.429  0.755   0.456  0.736      0.520  0.692   0.518  0.691   0.361  0.772
Item 4    0.500  0.706   0.451  0.740      0.554  0.667   0.699  0.546   0.437  0.734
Item 5    0.573  0.653   0.509  0.699      0.608  0.625   0.755  0.493   -      -
Item 6    0.581  0.646   0.602  0.630      0.540  0.677   0.771  0.476   0.491  0.646
Item 7    0.381  0.786   0.395  0.776      0.331  0.816   0.748  0.500   0.347  0.904
Item 8    0.428  0.756   0.455  0.737      0.375  0.789   0.719  0.528   0.287  0.803
Item 9    0.417  0.763   0.415  0.763      0.340  0.811   0.705  0.541   0.395  0.890
Item 10   0.564  0.660   0.601  0.631      0.522  0.690   0.662  0.579   -      -